Updates from: 05/06/2023 01:10:18
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Manage User Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-access.md
The following is an example of a user flow for gathering parental consent:
2. The application processes the JSON token and shows a screen to the minor, notifying them that parental consent is required and requesting the consent of a parent online.
-3. Azure AD B2C shows a sign-in journey that the user can sign in to normally and issues a token to the application that is set to include **legalAgeGroupClassification = "minorWithParentalConsent"**. The application collects the email address of the parent and verifies that the parent is an adult. To do so, it uses a trusted source, such as a national ID office, license verification, or credit card proof. If verification is successful, the application prompts the minor to sign in by using the Azure AD B2C user flow. If consent is denied (for example, if **legalAgeGroupClassification = "minorWithoutParentalConsent"**), Azure AD B2C returns a JSON token (not a login) to the application to restart the consent process. It is optionally possible to customize the user flow so that a minor or an adult can regain access to a minor's account by sending a registration code to the minor's email address or the adult's email address on record.
+3. Azure AD B2C shows a sign-in journey that the user can sign in to normally and issues a token to the application that is set to include **legalAgeGroupClassification = "minorWithParentalConsent"**. The application collects the email address of the parent and verifies that the parent is an adult. To do so, it uses a trusted source, such as a national/regional ID office, license verification, or credit card proof. If verification is successful, the application prompts the minor to sign in by using the Azure AD B2C user flow. If consent is denied (for example, if **legalAgeGroupClassification = "minorWithoutParentalConsent"**), Azure AD B2C returns a JSON token (not a login) to the application to restart the consent process. It is optionally possible to customize the user flow so that a minor or an adult can regain access to a minor's account by sending a registration code to the minor's email address or the adult's email address on record.
4. The application offers an option to the minor to revoke consent.
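For illustration, an application can branch on the **legalAgeGroupClassification** claim straight from the token payload. The following shell sketch isn't production validation code; it assumes the token is available in an `ID_TOKEN` variable and that `jq` is installed, and the claim values shown are the ones described in the flow above.

```bash
# Minimal sketch: read the legalAgeGroupClassification claim from a token payload.
# Assumes the token is available in $ID_TOKEN and that jq is installed.
payload=$(echo "$ID_TOKEN" | cut -d '.' -f2)

# Restore standard base64 padding before decoding the base64url segment.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac

age_group=$(echo "$payload" | tr -- '-_' '+/' | base64 -d | jq -r '.legalAgeGroupClassification // empty')

case "$age_group" in
  minorWithoutParentalConsent) echo "Parental consent required - restart the consent flow." ;;
  minorWithParentalConsent)    echo "Parental consent on record - continue sign-in." ;;
  *)                           echo "No minor-specific handling required (claim value: '${age_group:-not present}')." ;;
esac
```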
active-directory Check Status User Account Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/check-status-user-account-provisioning.md
Previously updated : 05/04/2023 Last updated : 05/05/2023
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration.md
For details about the decision tree below, read [MSAL.NET or Microsoft.Identity.
[See examples](https://identitydivision.visualstudio.com/DevEx/_wiki/wikis/DevEx.wiki/20413/1P-ADAL.NET-to-MSAL.NET-migration-examples) of other 1P teams who have already, or are currently, migrating from ADAL to one of the MSAL+ solutions above. See their code, and in some cases read about their migration story. -->
-
+### Deprecated ADAL.NET NuGet packages and their MSAL.NET equivalents
+You might unknowingly consume ADAL dependencies from other Azure SDKs. The following are a few of the deprecated packages and their MSAL alternatives; a quick command-line check for lingering references follows the table.
+
+| ADAL.NET Package (Deprecated) | MSAL.NET Package (Current) |
+| -- | -- |
+| `Microsoft.Azure.KeyVault`| `Azure.Security.KeyVault.Secrets, Azure.Security.KeyVault.Keys, Azure.Security.KeyVault.Certificates`|
+| `Microsoft.Azure.Management.Compute`| `Azure.ResourceManager.Compute`|
+| `Microsoft.Azure.Services.AppAuthentication`| `Azure.Identity`|
+| `Microsoft.Azure.Management.StorageSync`| `Azure.ResourceManager.StorageSync`|
+| `Microsoft.Azure.Management.Fluent`| `Azure.ResourceManager`|
+| `Microsoft.Azure.Management.EventGrid`| `Azure.ResourceManager.EventGrid`|
+| `Microsoft.Azure.Management.Automation`| `Azure.ResourceManager.Automation`|
+| `Microsoft.Azure.Management.Compute.Fluent`| `Azure.ResourceManager.Compute`|
+| `Microsoft.Azure.Management.MachineLearning.Fluent`| `Azure.ResourceManager.MachineLearningCompute`|
+| `Microsoft.Azure.Management.Media, windowsazure.mediaservices`| `Azure.ResourceManager.Media`|
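If you're unsure whether a project still pulls in one of these deprecated packages, directly or transitively, a quick command-line check can help. This is only a sketch; it assumes the .NET SDK is installed and that you run it from the project or solution folder, and the pattern also matches ADAL itself (`Microsoft.IdentityModel.Clients.ActiveDirectory`).

```bash
# List every package reference, including transitive ones, and flag ADAL-era packages.
dotnet list package --include-transitive | grep -iE \
  'Microsoft\.IdentityModel\.Clients\.ActiveDirectory|Microsoft\.Azure\.KeyVault|Microsoft\.Azure\.Services\.AppAuthentication|Microsoft\.Azure\.Management\.|windowsazure\.mediaservices'
```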
+ ## Next steps - Learn about [public client and confidential client applications](msal-client-applications.md).
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
Title: "Quickstart: Create an Azure Active Directory tenant" description: In this quickstart, you learn how to create an Azure Active Directory tenant for use in developing applications that use the Microsoft identity platform for authentication and authorization. -+ Previously updated : 02/17/2023 Last updated : 04/19/2023
To begin building external facing applications that sign in social and local acc
## Next steps > [!div class="nextstepaction"]
-> [Register an app](quickstart-register-app.md) to integrate with Microsoft identity platform.
+> [Register an app](quickstart-register-app.md)
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
Previously updated : 03/10/2023 Last updated : 04/19/2023
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Previously updated : 11/16/2022 Last updated : 04/28/2023 # Customer intent: As an application developer, I want a quick introduction to the Microsoft identity platform so I can decide if this platform meets my application development requirements.
Choose your preferred [application scenario](authentication-flows-app-scenarios.
- [Daemon app](scenario-daemon-overview.md) - [Mobile app](scenario-mobile-overview.md)
+For a more in-depth look at building applications using the Microsoft identity platform, see our multipart tutorial series for the following applications:
+
+- [React Single-page app (SPA)](single-page-app-tutorial-01-register-app.md)
+- [.NET Web app](web-app-tutorial-01-register-application.md)
+- [.NET Web API](web-api-tutorial-01-register-app.md)
+ As you work with the Microsoft identity platform to integrate authentication and authorization in your apps, you can refer to this image that outlines the most common app scenarios and their identity components. Select the image to view it full-size. [![Metro map showing several application scenarios in Microsoft identity platform](./media/v2-overview/application-scenarios-identity-platform.png)](./media/v2-overview/application-scenarios-identity-platform.png#lightbox)
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
As you plan the governance of external access to your content, consider content,
To define High, Medium, or Low Business Impact (HBI, MBI, LBI) for data, sites, and groups, consider the effect on your organization if the wrong content types are shared.
-* Credit card, passport, national-ID numbers
+* Credit card, passport, national/regional ID numbers
* [Apply a sensitivity label to content automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide&preserve-view=true) * Content created by corporate officers: compliance, finance, executive, etc. * Strategic or financial data in libraries or sites.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Workload Identity Federation enables developers to use managed identities for th
For more information, see: - [Workload identity federation](../workload-identities/workload-identity-federation.md). - [Configure a user-assigned managed identity to trust an external identity provider (preview)](../workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md)-- [Use Azure AD workload identity (preview) with Azure Kubernetes Service (AKS)](../../aks/workload-identity-overview.md)
+- [Use Azure AD workload identity with Azure Kubernetes Service (AKS)](../../aks/workload-identity-overview.md)
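For orientation, enabling the AKS side of this integration generally means turning on the OIDC issuer and workload identity for the cluster. The following is a sketch with placeholder resource names:

```azurecli
# Turn on the OIDC issuer and Azure AD workload identity on an existing AKS cluster.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-oidc-issuer \
    --enable-workload-identity
```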
active-directory Cisco Unity Connection Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-unity-connection-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Cisco Unity Connection
+description: Learn how to configure single sign-on between Azure Active Directory and Cisco Unity Connection.
++++++++ Last updated : 05/05/2023++++
+# Azure Active Directory SSO integration with Cisco Unity Connection
+
+In this article, you learn how to integrate Cisco Unity Connection with Azure Active Directory (Azure AD). Cisco Unity Connection is a robust unified messaging and voicemail solution that provides users with flexible message access options, including support for voice commands and speech-to-text (STT) transcriptions. When you integrate Cisco Unity Connection with Azure AD, you can:
+
+* Control in Azure AD who has access to Cisco Unity Connection.
+* Enable your users to be automatically signed in to Cisco Unity Connection with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Cisco Unity Connection in a test environment. Cisco Unity Connection supports **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Cisco Unity Connection, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Cisco Unity Connection single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Cisco Unity Connection application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Cisco Unity Connection from the Azure AD gallery
+
+Add Cisco Unity Connection from the Azure AD application gallery to configure single sign-on with Cisco Unity Connection. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
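If you prefer scripting, a comparable test user can be created with the Azure CLI. The following is a sketch only; the user principal name domain and password are placeholders for values from your own tenant.

```azurecli
# Create the B.Simon test user; replace the UPN domain and password with your own values.
az ad user create \
    --display-name "B.Simon" \
    --user-principal-name "b.simon@contoso.onmicrosoft.com" \
    --password "<strong-password>"
```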
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Cisco Unity Connection** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, if you have a **Service Provider metadata file**, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows how to upload metadata file.](common/upload-metadata.png "File")
+
+   b. Click the **folder logo** to select the metadata file, and then click **Upload**.
+
+ ![Screenshot shows to choose and browse metadata file.](common/browse-upload-metadata.png "Folder")
+
+   c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values are automatically populated in the **Basic SAML Configuration** section.
+
+ d. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<FQDN_CUC_node>`
+
+ > [!Note]
+   > You'll get the **Service Provider metadata file** from the [Cisco Unity Connection support team](mailto:unity-tme@cisco.com). If the **Identifier** and **Reply URL** values aren't automatically populated, fill in the values manually according to your requirements.
+
+1. The Cisco Unity Connection application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the attributes above, the Cisco Unity Connection application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review and adjust them to meet your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | uid | user.onpremisessamaccountname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. In the **Set up Cisco Unity Connection** section, copy the appropriate URL(s) based on your requirements.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Cisco Unity Connection SSO
+
+To configure single sign-on on the **Cisco Unity Connection** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Cisco Unity Connection support team](mailto:unity-tme@cisco.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create Cisco Unity Connection test user
+
+In this section, you create a user called Britta Simon in Cisco Unity Connection. Work with the [Cisco Unity Connection support team](mailto:unity-tme@cisco.com) to add the users to the Cisco Unity Connection platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects you to the Cisco Unity Connection sign-on URL, where you can initiate the login flow.
+
+* Go to the Cisco Unity Connection sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Cisco Unity Connection tile in My Apps, you're redirected to the Cisco Unity Connection sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Cisco Unity Connection, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Snowflake Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
This tutorial demonstrates the steps that you perform in Snowflake and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users and groups to [Snowflake](https://www.Snowflake.com/pricing/). For important details on what this service does, how it works, and frequently asked questions, see [What is automated SaaS app user provisioning in Azure AD?](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This connector is currently in public preview. For information about terms of use, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Capabilities supported > [!div class="checklist"]
active-directory Hipaa Configure Azure Active Directory For Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-configure-azure-active-directory-for-compliance.md
Microsoft services such as Azure Active Directory (Azure AD) can help you meet identity-related requirements for the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
-The HIPAA Security Rule (HSR) establishes national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The HSR is managed by the U.S. Department of Health and Human Services (HHS) and requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information.
+The HIPAA Security Rule (HSR) establishes standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The HSR is managed by the U.S. Department of Health and Human Services (HHS) and requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information.
Technical safeguards requirements and objectives are defined in Title 45 of the Code of Federal Regulations (CFRs). Part 160 of Title 45 provides the general administrative requirements, and Part 164's subparts A and C describe the security and privacy requirements.
active-directory Remote Onboarding New Employees Id Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/remote-onboarding-new-employees-id-verification.md
Enterprises onboarding users face significant challenges onboarding remote users
- An Azure AD account should be pre-created for every user. The account should be used as part of the site's request validation process. - Administrators frequently deal with discrepancies between users' information held in a company's IT systems, like human resource applications or identity management solutions, and the information the users provide. For example, an employee might have "James" as their first name but their profile has their name as "Jim". For those scenarios: 1. At the beginning of the HR process, candidates must use their name exactly as it appears in government issued documents. Taking this approach simplifies validation logic.
- 1. Design validation logic to include attributes that are more likely to have an exact match against the HR system. Common attributes include street address, date of birth, nationality, national identification number (if applicable), in addition to first and last name.
+ 1. Design validation logic to include attributes that are more likely to have an exact match against the HR system. Common attributes include street address, date of birth, nationality, national/regional identification number (if applicable), in addition to first and last name.
 1. As a fallback, plan for human review to work through ambiguous/non-conclusive results. This process might include temporarily storing the attributes presented in the VC, a phone call with the user, etc. - Multinational organizations may need to work with different identity proofing partners based on the region of the user. - Assume that the initial interaction between the user and the onboarding partner is untrusted. The onboarding portal should generate detailed logs for all requests processed that could be used for auditing purposes.
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 01/18/2023 Last updated : 05/04/2023 # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
Before you can use an Azure Files file share as a Kubernetes volume, you must cr
5. Run the following command to export the storage account key as an environment variable. ```azurecli-interactive
- STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
+ STORAGE_KEY=$(az storage account keys list --resource-group nodeResourceGroupName --account-name myAKSStorageAccount --query "[0].value" -o tsv)
``` 6. Run the following commands to echo the storage account name and key. Copy this information as these values are needed when you create the Kubernetes volume later in this article. ```azurecli-interactive
- echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY ```
Kubernetes needs credentials to access the file share created in the previous st
Use the `kubectl create secret` command to create the secret. The following example creates a secret named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey* from the previous step. To use an existing Azure storage account, provide the account name and key. ```bash
-kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
+kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=myAKSStorageAccount --from-literal=azurestorageaccountkey=$STORAGE_KEY
``` ### Mount file share as an inline volume
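As an illustration of an inline mount, a pod can reference the *azure-secret* secret directly through the Azure Files CSI driver. This is a minimal sketch; the pod name, image, and the share name `aksshare` are assumptions you'd replace with your own values.

```bash
# Sketch: run a pod that mounts the file share inline through the Azure Files CSI driver.
# Assumes the Kubernetes secret azure-secret exists and the share is named aksshare.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-azurefile
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      csi:
        driver: file.csi.azure.com
        volumeAttributes:
          secretName: azure-secret
          shareName: aksshare
EOF
```

After the pod starts, anything written to `/mnt/azure` lands on the Azure Files share.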
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Customizing your node configuration allows you to adjust operating system (OS) s
Before you begin, make sure you have an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You also need to register the feature flag using the following steps: + 1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command. ```azurecli
Kubelet custom configuration is supported for Linux and Windows node pools. Supp
| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. | | `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). | | `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
-| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 50 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. | | `podMaxPids` | -1 to kernel PID limit | -1 (∞)| The maximum amount of process IDs that can be running in a Pod |
For agent nodes, which are expected to handle very large numbers of concurrent s
| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. | | `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. | | `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
-| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
+| `net.ipv4.tcp_keepalive_intvl` | 10 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. | | `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000] | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. | | `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
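To apply settings like the ones in these tables, kubelet and OS configurations are supplied as JSON files when you create a cluster or node pool. The following is a hedged sketch rather than the full procedure; the property values are illustrative, and the resource group, cluster, and node pool names are placeholders.

```azurecli
# Sketch: write JSON files containing a few of the settings above, then apply them to a new node pool.
cat > linuxkubeletconfig.json <<'EOF'
{
  "failSwapOn": false,
  "imageGcHighThreshold": 85,
  "imageGcLowThreshold": 80,
  "containerLogMaxSizeMB": 20,
  "containerLogMaxFiles": 5
}
EOF

cat > linuxosconfig.json <<'EOF'
{
  "sysctls": {
    "netIpv4TcpTwReuse": true,
    "netIpv4IpLocalPortRange": "32000 60000"
  }
}
EOF

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --kubelet-config ./linuxkubeletconfig.json \
    --linux-os-config ./linuxosconfig.json
```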
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Title: Deploy an Azure container offer from Azure Marketplace
-description: Learn how to deploy Azure container offers from Azure Marketplace on an Azure Kubernetes Service (AKS) cluster.
+ Title: Deploy a Kubernetes application from Azure Marketplace
+description: Learn how to deploy Kubernetes applications from Azure Marketplace on an Azure Kubernetes Service (AKS) cluster.
Previously updated : 09/30/2022 Last updated : 05/01/2023
-# Deploy a container offer from Azure Marketplace (preview)
+# Deploy a Kubernetes application from Azure Marketplace (preview)
[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners.
This feature is currently supported only in the following regions:
- Australia East - Central India
-Kubernetes application-based container offers cannot be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
+Kubernetes application-based container offers can't be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
## Register resource providers
az provider register --namespace Microsoft.KubernetesConfiguration --wait
## Select and deploy a Kubernetes offer
+### From the AKS portal screen
+
+1. In the [Azure portal](https://portal.azure.com/), you can deploy a Kubernetes application from an existing cluster by navigating to **Marketplace** or selecting **Extensions + applications**, then selecting **+ Add**.
+
+ :::image type="content" source="./media/deploy-marketplace/add-inline.png" alt-text="The Azure portal page for the A K S cluster is shown. 'Extensions + Applications' is selected, and '+ Add' is highlighted." lightbox="./media/deploy-marketplace/add.png":::
+
+1. You can search for an offer or publisher directly by name, or you can browse all offers.
+
+ :::image type="content" source="./media/deploy-marketplace/marketplace-view-inline.png" alt-text="Screenshot of Kubernetes offers in the Azure portal." lightbox="./media/deploy-marketplace/marketplace-view.png":::
+
+1. After you decide on an application, select the offer.
+
+1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**.
+
+ :::image type="content" source="./media/deploy-marketplace/plan-pricing.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, showing plan and pricing information.":::
+
+1. Follow each page in the wizard, all the way through Review + Create. Fill in information for your resource group, your cluster, and any configuration options that the application requires. You can decide to deploy on a new AKS cluster or use an existing cluster.
+
+ :::image type="content" source="./media/deploy-marketplace/review-create.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a cluster or using an existing one.":::
+
+1. When the application is deployed, the portal shows your deployment in progress, along with details.
+
+ :::image type="content" source="./media/deploy-marketplace/deploying.png" alt-text="Screenshot of the Azure portal deployments screen, showing that the Kubernetes offer is currently being deployed.":::
+
+### From the Marketplace portal screen
+ 1. In the [Azure portal](https://portal.azure.com/), search for **Marketplace** on the top search bar. In the results, under **Services**, select **Marketplace**. 1. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, on the left side under **Categories** select **Containers**.
az provider register --namespace Microsoft.KubernetesConfiguration --wait
> [!IMPORTANT] > The **Containers** category includes both Kubernetes applications and standalone container images. This walkthrough is specific to Kubernetes applications. If you find that the steps to deploy an offer differ in some way, you're most likely trying to deploy a container image-based offer instead of a Kubernetes application-based offer.
-1. You will see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select **See more**.
+1. You'll see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select **See more**.
- :::image type="content" source="./media/deploy-marketplace/see-more-inline.png" alt-text="Screenshot of Azure Marketplace K8s offers in the Azure portal" lightbox="./media/deploy-marketplace/see-more.png":::
+ :::image type="content" source="./media/deploy-marketplace/see-more-inline.png" alt-text="Screenshot of Azure Marketplace K8s offers in the Azure portal. 'See More' is highlighted." lightbox="./media/deploy-marketplace/see-more.png":::
1. After you decide on an application, select the offer.
az provider register --namespace Microsoft.KubernetesConfiguration --wait
:::image type="content" source="./media/deploy-marketplace/deployment-inline.png" alt-text="Screenshot of the Azure portal that shows a successful resource deployment to the cluster." lightbox="./media/deploy-marketplace/deployment-full.png":::
-1. Verify the deployment by using the following command to list the extensions that are running on your cluster:
+## Verify the deployment
+
+### [Azure CLI](#tab/azure-cli)
+
+Verify the deployment by using the following command to list the extensions that are running on your cluster:
```azurecli-interactive az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ```
+### [Portal](#tab/azure-portal)
+
+Verify the deployment by navigating to the cluster where you recently installed the extension, and then selecting "Extensions + Applications", where you'll see the extension status:
+
+ :::image type="content" source="./media/deploy-marketplace/verify-inline.png" lightbox="./media/deploy-marketplace/verify.png" alt-text="The Azure portal page for the A K S cluster is shown. 'Extensions + Applications' is selected, and the deployed extension is listed.":::
+++ ## Manage the offer lifecycle For lifecycle management, an Azure Kubernetes offer is represented as a cluster extension for AKS. For more information, see [Cluster extensions for AKS][cluster-extensions].
-Purchasing an offer from Azure Marketplace creates a new instance of the extension on your AKS cluster. You can view the extension instance from the cluster by using the following command:
+Purchasing an offer from Azure Marketplace creates a new instance of the extension on your AKS cluster.
+
+### [Azure CLI](#tab/azure-cli)
+
+You can view the extension instance from the cluster by using the following command:
```azurecli-interactive az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ```
+### [Portal](#tab/azure-portal)
+
+First, navigate to an existing cluster, then select "Extensions + applications":
++
+You'll see your recently installed extensions listed:
++
+Select an extension name to navigate to a properties view where you're able to disable auto upgrades, check the provisioning state, delete the extension instance, or modify configuration settings as needed.
++++ ## Monitor billing and usage information To monitor billing and usage information for the offer that you deployed:
To monitor billing and usage information for the offer that you deployed:
## Remove an offer
-You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster. For example:
+You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster.
+
+### [Azure CLI](#tab/azure-cli)
```azurecli-interactive az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ```
+### [Portal](#tab/azure-portal)
+
+Select an application, then select the uninstall button to remove the extension from your cluster:
++++ ## Troubleshooting If you experience issues, see the [troubleshooting checklist for failed deployments of a Kubernetes offer][marketplace-troubleshoot].
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade a cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 11/15/2022 Last updated : 05/04/2023 #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features. # Tutorial: Upgrade Kubernetes in Azure Kubernetes Service (AKS)
-As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster by using the Azure CLI, Azure PowerShell, or the Azure portal.
+As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster using the Azure CLI, Azure PowerShell, or the Azure portal.
In this tutorial, part seven of seven, you learn how to:
In this tutorial, part seven of seven, you learn how to:
## Before you begin
-In previous tutorials, an application was packaged into a container image, and this container image was uploaded to Azure Container Registry (ACR). You also created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you packaged an application into a container image and uploaded the container image to Azure Container Registry (ACR). You also created an AKS cluster and deployed an application to it. If you haven't completed these steps and want to follow along with this tutorial, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
-* If you're using Azure CLI, this tutorial requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you're using Azure CLI, this tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
## Get available cluster versions ### [Azure CLI](#tab/azure-cli)
-Before you upgrade a cluster, use the [az aks get-upgrades][] command to check which Kubernetes releases are available.
+* Before you upgrade, check which Kubernetes releases are available for your cluster using the [`az aks get-upgrades`][az-aks-get-upgrades] command.
-```azurecli
-az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
-```
+ ```azurecli
+ az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
+ ```
-In the following example output, the current version is *1.18.10*, and the available versions are shown under *upgrades*.
+ The following example output shows the current version as *1.18.10* and lists the available versions under *upgrades*.
-```output
-{
- "agentPoolProfiles": null,
- "controlPlaneProfile": {
- "kubernetesVersion": "1.18.10",
- ...
- "upgrades": [
- {
- "isPreview": null,
- "kubernetesVersion": "1.19.1"
+ ```output
+ {
+ "agentPoolProfiles": null,
+ "controlPlaneProfile": {
+ "kubernetesVersion": "1.18.10",
+ ...
+ "upgrades": [
+ {
+ "isPreview": null,
+ "kubernetesVersion": "1.19.1"
+ },
+ {
+ "isPreview": null,
+ "kubernetesVersion": "1.19.3"
+ }
+ ]
},
- {
- "isPreview": null,
- "kubernetesVersion": "1.19.3"
- }
- ]
- },
- ...
-}
-```
+ ...
+ }
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Before you upgrade a cluster, use the [Get-AzAksCluster][get-azakscluster] cmdlet to check which Kubernetes version you're running and the region in which it resides.
+1. Before you upgrade, check which Kubernetes releases are available for your cluster and the region where your cluster resides using the [`Get-AzAksCluster`][get-azakscluster] cmdlet.
-```azurepowershell
-Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
- Select-Object -Property Name, KubernetesVersion, Location
-```
+ ```azurepowershell
+ Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Select-Object -Property Name, KubernetesVersion, Location
+ ```
-In the following example output, the current version is *1.19.9*.
+ The following example output shows the current version as *1.19.9* and the location as *eastus*.
-```output
-Name KubernetesVersion Location
-- -- --
-myAKSCluster 1.19.9 eastus
-```
+ ```output
+ Name KubernetesVersion Location
+ - -- --
+ myAKSCluster 1.19.9 eastus
+ ```
-Use the [Get-AzAksVersion][get-azaksversion] cmdlet to check which Kubernetes upgrade releases are available in the region where your AKS cluster resides.
+2. Check which Kubernetes upgrade releases are available in the region where your cluster resides using the [`Get-AzAksVersion`][get-azaksversion] cmdlet.
-```azurepowershell
-Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
-```
+ ```azurepowershell
+ Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
+ ```
-The available versions are shown under *OrchestratorVersion*.
+ The following example output shows the available versions under *OrchestratorVersion*.
-```output
-Default IsPreview OrchestratorType OrchestratorVersion
-- - -
- Kubernetes 1.20.2
- Kubernetes 1.20.5
-```
+ ```output
+ Default IsPreview OrchestratorType OrchestratorVersion
+ - - -
+ Kubernetes 1.20.2
+ Kubernetes 1.20.5
+ ```
### [Azure portal](#tab/azure-portal)
-To check which Kubernetes releases are available for your cluster:
+Check which Kubernetes releases are available for your cluster using the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to your AKS cluster.
If no upgrades are available, create a new cluster with a supported version of K
## Upgrade a cluster
-AKS nodes are carefully cordoned and drained to minimize any potential disruptions to running applications. The following steps are performed during this process:
+AKS nodes are carefully cordoned and drained to minimize any potential disruptions to running applications. During this process, AKS performs the following steps:
-1. The Kubernetes scheduler prevents additional pods from being scheduled on a node that is to be upgraded.
-1. Running pods on the node are scheduled on other nodes in the cluster.
-1. A new node is created that runs the latest Kubernetes components.
-1. When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on the new node.
-1. The old node is deleted, and the next node in the cluster begins the cordon and drain process.
+* Adds a new buffer node (or as many nodes as configured in [max surge](./upgrade-cluster.md#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+* [Cordons and drains][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it [cordons and drains][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
+* When the old node is fully drained, it's reimaged to receive the new version and becomes the buffer node for the following node to be upgraded.
+* This process repeats until all nodes in the cluster have been upgraded.
+* At the end of the process, the last buffer node is deleted, maintaining the existing agent node count and zone balance.
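The number of buffer nodes used in parallel is governed by the node pool's max surge setting. As a quick illustration (a sketch; the value and resource names are placeholders):

```azurecli
# Increase max surge so upgrades cycle more nodes in parallel (value is illustrative).
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --max-surge 33%
```

A higher max surge speeds up upgrades at the cost of more disruption to running workloads.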
[!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)] ### [Azure CLI](#tab/azure-cli)
-Use the [az aks upgrade][] command to upgrade your AKS cluster.
+* Upgrade your cluster using the [`az aks upgrade`][az-aks-upgrade] command.
-```azurecli
-az aks upgrade \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --kubernetes-version KUBERNETES_VERSION
-```
+ ```azurecli
+ az aks upgrade \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --kubernetes-version KUBERNETES_VERSION
+ ```
-> [!NOTE]
-> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
+ > [!NOTE]
+ > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-The following example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now reports *1.19.1*.
+ The following example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now shows *1.19.1*.
-```output
-{
- "agentPoolProfiles": [
+ ```output
{
- "count": 3,
- "maxPods": 110,
- "name": "nodepool1",
- "osType": "Linux",
- "storageProfile": "ManagedDisks",
- "vmSize": "Standard_DS1_v2",
+ "agentPoolProfiles": [
+ {
+ "count": 3,
+ "maxPods": 110,
+ "name": "nodepool1",
+ "osType": "Linux",
+ "storageProfile": "ManagedDisks",
+ "vmSize": "Standard_DS1_v2",
+ }
+ ],
+ "dnsPrefix": "myAKSClust-myResourceGroup-19da35",
+ "enableRbac": false,
+ "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io",
+ "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
+ "kubernetesVersion": "1.19.1",
+ "location": "eastus",
+ "name": "myAKSCluster",
+ "type": "Microsoft.ContainerService/ManagedClusters"
}
- ],
- "dnsPrefix": "myAKSClust-myResourceGroup-19da35",
- "enableRbac": false,
- "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io",
- "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
- "kubernetesVersion": "1.19.1",
- "location": "eastus",
- "name": "myAKSCluster",
- "type": "Microsoft.ContainerService/ManagedClusters"
-}
-```
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade your AKS cluster.
+* Upgrade your cluster using the [`Set-AzAksCluster`][set-azakscluster] cmdlet.
-```azurepowershell
-Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
-```
+ ```azurepowershell
+ Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
+ ```
-> [!NOTE]
-> You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-
-The following example output shows the result of upgrading to *1.19.9*. Notice the *kubernetesVersion* now reports *1.20.2*.
-
-```output
-ProvisioningState : Succeeded
-MaxAgentPools : 100
-KubernetesVersion : 1.20.2
-PrivateFQDN :
-AgentPoolProfiles : {default}
-Name : myAKSCluster
-Type : Microsoft.ContainerService/ManagedClusters
-Location : eastus
-Tags : {}
-```
+ > [!NOTE]
+ > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
+
+ The following example output shows the result of upgrading to *1.19.9*. Notice the *kubernetesVersion* now shows *1.20.2*.
+
+ ```output
+ ProvisioningState : Succeeded
+ MaxAgentPools : 100
+ KubernetesVersion : 1.20.2
+ PrivateFQDN :
+ AgentPoolProfiles : {default}
+ Name : myAKSCluster
+ Type : Microsoft.ContainerService/ManagedClusters
+ Location : eastus
+ Tags : {}
+ ```
### [Azure portal](#tab/azure-portal)
-To upgrade your AKS cluster:
+Upgrade your cluster using the following steps:
1. In the Azure portal, navigate to your AKS cluster. 2. Under **Settings**, select **Cluster configuration**.
It takes a few minutes to upgrade the cluster, depending on how many nodes you h
## View the upgrade events
-When you upgrade your cluster, the following Kubernetes events may occur on the nodes:
-
-* **Surge**: Create surge node.
-* **Drain**: Pods are being evicted from the node. Each pod has a *5 minute timeout* to complete the eviction.
-* **Update**: Update of a node has succeeded or failed.
-* **Delete**: Delete a surge node.
+> [!NOTE]
+> When you upgrade your cluster, the following Kubernetes events may occur on the nodes:
+>
+> * **Surge**: Create a surge node.
+> * **Drain**: Evict pods from the node. Each pod has a *five minute timeout* to complete the eviction.
+> * **Update**: Update of a node has succeeded or failed.
+> * **Delete**: Delete a surge node.
-Use `kubectl get events` to show events in the default namespaces while running an upgrade.
+* View the upgrade events in the default namespaces using the `kubectl get events` command.
-```azurecli-interactive
-kubectl get events
-```
+ ```azurecli-interactive
+ kubectl get events
+ ```
-The following example output shows some of the above events listed during an upgrade.
+ The following example output shows some of the above events listed during an upgrade.
-```output
-...
-default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
-...
-default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
-...
-```
+ ```output
+ ...
+ default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
+ ...
+ default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
+ ...
+ ```
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
### [Azure CLI](#tab/azure-cli)
-Confirm that the upgrade was successful using the [az aks show][] command.
+* Confirm the upgrade was successful using the [`az aks show`][az-aks-show] command.
-```azurecli
-az aks show --resource-group myResourceGroup --name myAKSCluster --output table
-```
+ ```azurecli
+ az aks show --resource-group myResourceGroup --name myAKSCluster --output table
+ ```
-The following example output shows the AKS cluster runs *KubernetesVersion 1.19.1*:
+ The following example output shows the AKS cluster runs *KubernetesVersion 1.19.1*:
-```output
-Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
- - - - -
-myAKSCluster eastus myResourceGroup 1.19.1 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
-```
+ ```output
+ Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
+ - - - -
+ myAKSCluster eastus myResourceGroup 1.19.1 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Confirm that the upgrade was successful using the [Get-AzAksCluster][get-azakscluster] cmdlet.
+* Confirm the upgrade was successful using the [`Get-AzAksCluster`][get-azakscluster] cmdlet.
-```azurepowershell
-Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
- Select-Object -Property Name, Location, KubernetesVersion, ProvisioningState
-```
+ ```azurepowershell
+ Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Select-Object -Property Name, Location, KubernetesVersion, ProvisioningState
+ ```
-The following example output shows the AKS cluster runs *KubernetesVersion 1.20.2*:
+ The following example output shows the AKS cluster runs *KubernetesVersion 1.20.2*:
-```output
-Name Location KubernetesVersion ProvisioningState
-- -- -- --
-myAKSCluster eastus 1.20.2 Succeeded
-```
+ ```output
+ Name Location KubernetesVersion ProvisioningState
+ - -- -- --
+ myAKSCluster eastus 1.20.2 Succeeded
+ ```
### [Azure portal](#tab/azure-portal)
-To confirm that the upgrade was successful, navigate to your AKS cluster in the Azure portal. On the **Overview** page, select the **Kubernetes version** and ensure it's the latest version you installed in the previous step.
+Confirm the upgrade was successful using the following steps:
+
+1. In the Azure portal, navigate to your AKS cluster.
+2. On the **Overview** page, select the **Kubernetes version** and ensure it's the latest version you installed in the previous step.
As this tutorial is the last part of the series, you may want to delete your AKS
### [Azure CLI](#tab/azure-cli)
-Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
+ ```azurecli-interactive
+ az group delete --name myResourceGroup --yes --no-wait
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name myResourceGroup
+ ```
### [Azure portal](#tab/azure-portal)
-To delete your AKS cluster:
+Delete your cluster using the following steps:
1. In the Azure portal, navigate to your AKS cluster. 2. On the **Overview** page, select **Delete**.
To delete your AKS cluster:
> [!NOTE]
-> When you delete the cluster, the Azure Active Directory (AAD) service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and it doesn't require that you provision or rotate any secrets.
+> When you delete the cluster, the Azure Active Directory (Azure AD) service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and doesn't require that you provision or rotate any secrets.
## Next steps In this tutorial, you upgraded Kubernetes in an AKS cluster. You learned how to: > [!div class="checklist"]
+>
> * Identify current and available Kubernetes versions. > * Upgrade your Kubernetes nodes. > * Validate a successful upgrade.
-For more information on AKS, see [AKS overview][aks-intro]. For guidance on how to create full solutions with AKS, see [AKS solution guidance][aks-solution-guidance].
+For more information on AKS, see the [AKS overview][aks-intro]. For guidance on how to create full solutions with AKS, see the [AKS solution guidance][aks-solution-guidance].
<!-- LINKS - external --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
For more information on AKS, see [AKS overview][aks-intro]. For guidance on how
<!-- LINKS - internal --> [aks-intro]: ./intro-kubernetes.md [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
-[az aks show]: /cli/azure/aks#az_aks_show
-[az aks get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
-[az aks upgrade]: /cli/azure/aks#az_aks_upgrade
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
+[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
[azure-cli-install]: /cli/azure/install-azure-cli [az-group-delete]: /cli/azure/group#az_group_delete [sp-delete]: kubernetes-service-principal.md#other-considerations
app-service Manage Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-automatic-scaling.md
+
+ Title: Automatic scaling
+description: Learn how to scale automatically in Azure App Service with zero configuration.
+ Last updated : 05/05/2023+++
+# Automatic scaling in Azure App Service
+
+> [!NOTE]
+> Automatic scaling is in preview. It's available for Premium Pv2 and Pv3 pricing tiers, and supported for all app types: Windows, Linux, and Windows container.
+>
+
+App Service offers automatic scaling that adjusts the number of instances based on incoming HTTP requests. Automatic scaling helps ensure that your web apps can handle different levels of traffic. You can adjust scaling settings, like setting the minimum and maximum number of instances per web app, to enhance performance. The platform tackles cold start issues by prewarming instances that act as a buffer when scaling out, resulting in smooth performance transitions. Billing is calculated per second using existing meters, and prewarmed instances are also charged per second.
+
+## How automatic scaling works
+
+It's common to deploy multiple web apps to a single App Service Plan. You can enable automatic scaling for an App Service Plan and configure a range of instances for each of the web apps. As your web app starts receiving incoming HTTP traffic, App Service monitors the load and adds instances. Resources may be shared when multiple web apps within an App Service Plan are required to scale out simultaneously.
+
+Here are a few scenarios where you should scale out automatically:
+
+- You don't want to set up autoscale rules based on resource metrics.
+- You want your web apps within the same App Service Plan to scale differently and independently of each other.
+- Your web app is connected to a database or legacy system, which may not scale as fast as the web app. Scaling automatically allows you to set the maximum number of instances your App Service Plan can scale to. This setting helps the web app avoid overwhelming the backend.
+
+> [!IMPORTANT]
+> [`Always On`](./configure-common.md?tabs=portal#configure-general-settings) must be disabled to use automatic scaling.
+>
+
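+If Always On is currently enabled, one way to turn it off is with the Azure CLI, roughly as sketched below (the resource group and app names are placeholders):
+
+```azurecli-interactive
+# Disable Always On for the web app; required before enabling automatic scaling
+az webapp config set --resource-group <RESOURCE_GROUP> --name <APP_NAME> --always-on false
+```
+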
+## Enable automatic scaling
+
+__Maximum burst__ is the highest number of instances that your App Service Plan can increase to based on incoming HTTP requests. For Premium v2 & v3 plans, you can set a maximum burst of up to 30 instances. The maximum burst must be equal to or greater than the number of workers specified for the App Service Plan.
+
+#### [Azure portal](#tab/azure-portal)
+
+To enable automatic scaling in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu. Select **Automatic (preview)**, update the __Maximum burst__ value, and select the **Save** button.
++
+#### [Azure CLI](#tab/azure-cli)
+
+The following command enables automatic scaling for your existing App Service Plan and web apps within this plan:
+
+```azurecli-interactive
+az appservice plan update --name <APP_SERVICE_PLAN> --resource-group <RESOURCE_GROUP> --elastic-scale true --max-elastic-worker-count <YOUR_MAX_BURST>
+```
+
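+To confirm the settings took effect, a query like the following can help (a sketch; `elasticScaleEnabled` and `maximumElasticWorkerCount` are the plan properties this command assumes are returned by `az appservice plan show`):
+
+```azurecli-interactive
+# Check that elastic scale is enabled and inspect the configured maximum burst
+az appservice plan show --name <APP_SERVICE_PLAN> --resource-group <RESOURCE_GROUP> \
+    --query "{elasticScaleEnabled: elasticScaleEnabled, maximumBurst: maximumElasticWorkerCount}"
+```
+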
+>[!NOTE]
+> If you receive an error message `Operation returned an invalid status 'Bad Request'`, try using a different resource group or creating a new one.
+>
+
+
+
+## Set minimum number of web app instances
+
+__Always ready instances__ is an app-level setting to specify the minimum number of instances. If load exceeds what the always ready instances can handle, additional instances are added (up to the specified __maximum burst__ for the App Service Plan).
+
+#### [Azure portal](#tab/azure-portal)
+
+To set the minimum number of instances in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu, update the **Always ready instances** value, and select the **Save** button.
++
+#### [Azure CLI](#tab/azure-cli)
+```azurecli-interactive
+ az webapp update --resource-group <RESOURCE_GROUP> --name <APP_NAME> --minimum-elastic-instance-count <ALWAYS_READY_COUNT>
+```
+++
+## Set maximum number of web app instances
+
+The __maximum scale limit__ sets the maximum number of instances a web app can scale to. The maximum scale limit helps when a downstream component like a database has limited throughput. The per-app maximum can be between 1 and the __maximum burst__.
+
+#### [Azure portal](#tab/azure-portal)
+
+To set the maximum number of web app instances in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu, select **Enforce scale out limit**, update the **Maximum scale limit**, and select the **Save** button.
++
+#### [Azure CLI](#tab/azure-cli)
+
+You can't change the maximum scale limit by using the Azure CLI. Instead, use the Azure portal.
+++
+## Update prewarmed instances
+
+The __prewarmed instance__ setting provides warmed instances as a buffer during HTTP scale and activation events. Prewarmed instances continue to buffer until the maximum scale-out limit is reached. The default prewarmed instance count is 1 and, for most scenarios, this value should remain as 1.
+
+#### [Azure portal](#tab/azure-portal)
+
+You can't change the prewarmed instance setting in the Azure portal. Instead, use the Azure CLI.
+
+#### [Azure CLI](#tab/azure-cli)
+
+You can modify the number of prewarmed instances for an app using the Azure CLI.
+
+```azurecli-interactive
+ az webapp update --resource-group <RESOURCE_GROUP> --name <APP_NAME> --prewarmed-instance-count <PREWARMED_COUNT>
+```
+++
+## Disable automatic scaling
+
+#### [Azure portal](#tab/azure-portal)
+
+To disable automatic scaling in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu, select **Manual**, and select the **Save** button.
++
+#### [Azure CLI](#tab/azure-cli)
+The following command disables automatic scaling for your existing App Service Plan and all web apps within this plan:
+
+```azurecli-interactive
+az appservice plan update --resource-group <RESOURCE_GROUP> --name <APP_SERVICE_PLAN> --elastic-scale false
+```
+
+
+
+## Frequently asked questions
+- [How is automatic scaling different than autoscale?](#how-is-automatic-scaling-different-than-autoscale)
+- [How does automatic scaling work with existing autoscale rules?](#how-does-automatic-scaling-work-with-existing-autoscale-rules)
+- [Does automatic scaling support Azure Function apps?](#does-automatic-scaling-support-azure-function-apps)
+- [How to monitor the current instance count and instance history?](#how-to-monitor-the-current-instance-count-and-instance-history)
++
+### How is automatic scaling different than autoscale?
+Automatic scaling is a new scaling option in App Service that automatically handles web app scaling decisions for you. **[Azure autoscale](../azure-monitor/autoscale/autoscale-overview.md)** is a pre-existing Azure capability for defining schedule-based and resource-based scaling rules for your App Service Plans.
+
+A comparison of scale out and scale in options available on App Service:
+
+| | **Manual scaling** | **Auto scaling** | **Automatic scaling** |
+| --- | --- | --- | --- |
+| Available pricing tiers | Basic and Up | Standard and Up | Premium v2 and Premium v3 |
+|Rule-based scaling |No |Yes |No, the platform manages the scale out and in based on HTTP traffic. |
+|Schedule-based scaling |No |Yes |No|
+|Always ready instances | No, your web app runs on the number of manually scaled instances. | No, your web app runs on other instances made available during the scale-out operation, based on the thresholds defined in your autoscale rules. | Yes (minimum 1) |
+|Prewarmed instances |No |No |Yes (default 1) |
+|Per-app maximum |No |No |Yes|
+
+### How does automatic scaling work with existing autoscale rules?
+Once automatic scaling is configured, existing Azure autoscale rules and schedules are disabled. Applications can use either automatic scaling or autoscale, but not both.
+
+### Does automatic scaling support Azure Function apps?
+No, you can only have Azure App Service web apps in the App Service Plan where you wish to enable automatic scaling. If you have existing Azure Functions apps in the same App Service Plan, or if you create new Azure Functions apps, then automatic scaling is disabled. For Functions, we recommend using the [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md) instead.
+
+### How to monitor the current instance count and instance history?
+Use Application Insights [Live Metrics](../azure-monitor/app/live-stream.md) to check the current instance count, and [performanceCounters](../azure-functions/analyze-telemetry-data.md#query-telemetry-data) to check the instance count history.
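+
+As a supplementary check, you can also list the instances currently serving an app from the command line (a sketch, assuming the `az webapp list-instances` command is available in your Azure CLI version):
+
+```azurecli-interactive
+# List the instances currently allocated to the web app
+az webapp list-instances --resource-group <RESOURCE_GROUP> --name <APP_NAME> --output table
+```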
+
+<a name="Next Steps"></a>
+
+## More resources
+
+* [Get started with autoscale in Azure](../azure-monitor/autoscale/autoscale-get-started.md)
+* [Configure PremiumV3 tier for App Service](app-service-configure-premium-tier.md)
+* [Scale up server capacity](manage-scale-up.md)
+* [High-density hosting](manage-scale-per-app.md)
app-service Manage Scale Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md
This article shows you how to scale your app in Azure App Service. There are two
[Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md). There, you find out how to use autoscaling, which is to scale instance count automatically based on predefined rules and schedules.
+>[!IMPORTANT]
+> [App Service now offers an automatic scale-out option to handle varying incoming HTTP requests.](./manage-automatic-scaling.md)
+>
+ The scale settings take only seconds to apply and affect all apps in your [App Service plan](../app-service/overview-hosting-plans.md). They don't require you to change your code or redeploy your application.
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
This section shows the configurable runtime settings for each supported language
| `AZURE_TOMCAT90_HOME` | Read-only. For native Windows apps, path to the Tomcat 9 installation. | | | `AZURE_SITE_HOME` | The value added to the Java args as `-Dsite.home`. The default is the value of `HOME`. | | | `HTTP_PLATFORM_PORT` | Added to Java args as `-Dport.http`. The following environment variables used by different Java web frameworks are also set to this value: `SERVER_PORT`, `MICRONAUT_SERVER_PORT`, `THORNTAIL_HTTP_PORT`, `RATPACK_PORT`, `QUARKUS_HTTP_PORT`, `PAYARAMICRO_PORT`. ||
-| `AZURE_LOGGING_DIR` | Native Windows apps only. Added to Java args as `-Dsite.logdir`. The default is `%HOME%\LogFiles\`. ||
+| `AZURE_LOGGING_DIR` | For Windows apps, added to Java args as `-Dsite.logdir`. The default is `%HOME%\LogFiles\`. On Linux, the default is `/home/LogFiles`. ||
<!-- WEBSITE_JAVA_COPY_ALL
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
The following are the fields extracted per document type. The Azure Form Recogni
>[!NOTE] >
-> In addition to specifying the IdDocument model, you can designate the ID type for (driver license, passport, national identity card, residence permit, or US social security card ).
+> In addition to specifying the IdDocument model, you can designate the ID type (driver license, passport, national/regional identity card, residence permit, or US social security card).
### Data extraction (all types)
The following are the fields extracted per document type. The Azure Form Recogni
|:|:--|:|:--| |`CountryRegion`|`countryRegion`|Country or region code|USA| |`Region`|`string`|State or province|Washington|
-|`DocumentNumber`|`string`|National identity card number|WDLABCD456DG|
-|`DocumentDiscriminator`|`string`|National identity card document discriminator|12645646464554646456464544|
+|`DocumentNumber`|`string`|National/regional identity card number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|National/regional identity card document discriminator|12645646464554646456464544|
|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.| |`LastName`|`string`|Surname|TALBOT| |`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* New model additions in gated preview: **Vaccination cards**, **Contracts**, **US Tax 1098**, **US Tax 1098-E**, and **US Tax 1098-T**. To request access to gated preview models, complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey). * [**Receipt model updates**](concept-receipt.md) * Receipt model has added support for thermal receipts.
- * Receipt model now has added language support for 18 languages and three language dialects (English, French, Portuguese).
+ * Receipt model now has added language support for 18 languages and three regional languages (English, French, Portuguese).
* Receipt model now supports `TaxDetails` extraction. * [**Layout model**](concept-layout.md) now has improved table recognition. * [**Read model**](concept-read.md) now has added improvement for single-digit character recognition.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* India ID cards and documents (PAN and Aadhaar) * Australia ID cards and documents (photo card, Key-pass ID) * Canada ID cards and documents (identification card, Maple card)
- * United Kingdom ID cards and documents (national identity card)
+ * United Kingdom ID cards and documents (national/regional identity card)
automation Manage Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-run-as-account.md
Title: Manage an Azure Automation Run As account description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal. Previously updated : 04/12/2023 Last updated : 05/05/2023
To learn more about Azure Automation account authentication, permissions require
## <a name="cert-renewal"></a>Renew a self-signed certificate
-The self-signed certificate that you have created for the Run As account expires one year from the date of creation. At some point before your Run As account expires, you must renew the certificate. You can renew it any time before it expires.
+The self-signed certificate that you have created for the Run As account expires one month from the date of creation. At some point before your Run As account expires, you must renew the certificate. You can renew it any time before it expires.
When you renew the self-signed certificate, the current valid certificate is retained to ensure that any runbooks that are queued up or actively running, and that authenticate with the Run As account, aren't negatively affected. The certificate remains valid until its expiration date.
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/faq.md
Title: "Azure Arc-enabled Kubernetes and GitOps frequently asked questions" Previously updated : 08/22/2022 Last updated : 05/04/2023 description: "This article contains a list of frequently asked questions related to Azure Arc-enabled Kubernetes and Azure GitOps."
Azure Arc-enabled Kubernetes allows you to extend AzureΓÇÖs management capabilit
## Do I need to connect my AKS clusters running on Azure to Azure Arc?
-Connecting an Azure Kubernetes Service (AKS) cluster to Azure Arc is only required for running Azure Arc-enabled services like App Services and Data Services on top of the cluster. This can be done using the [custom locations](custom-locations.md) feature of Azure Arc-enabled Kubernetes. This is a point in time limitation for now till cluster extensions and custom locations are introduced natively on top of AKS clusters.
-
-If you don't want to use custom locations and just want to use management features like Azure Monitor and Azure Policy (Gatekeeper), they are available natively on AKS and connection to Azure Arc is not required in such cases.
+Currently, connecting an Azure Kubernetes Service (AKS) cluster to Azure Arc is not required for most scenarios. You may want to connect a cluster to run certain Azure Arc-enabled services such as App Services and Data Services on top of the cluster. This can be done using the [custom locations](custom-locations.md) feature of Azure Arc-enabled Kubernetes.
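+
+For illustration, after a cluster is connected, the custom locations feature (together with cluster connect, which it depends on) is typically enabled with a command along these lines (a sketch with placeholder names):
+
+```azurecli-interactive
+# Enable the cluster connect and custom locations features on a connected cluster
+az connectedk8s enable-features \
+  --name <CLUSTER_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --features cluster-connect custom-locations
+```
+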
## Should I connect my AKS-HCI cluster and Kubernetes clusters on Azure Stack Edge to Azure Arc?
-Yes, connecting your AKS-HCI cluster or Kubernetes clusters on Azure Stack Edge to Azure Arc provides clusters with resource representation in Azure Resource Manager. This resource representation extends capabilities like Cluster Configuration, Azure Monitor, and Azure Policy (Gatekeeper) to connected Kubernetes clusters.
+Connecting your AKS-HCI cluster or Kubernetes clusters on Azure Stack Edge to Azure Arc provides clusters with resource representation in Azure Resource Manager. This resource representation extends capabilities like Cluster Configuration, Azure Monitor, and Azure Policy (Gatekeeper) to connected Kubernetes clusters.
If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, AKS on Azure Stack HCI (>= April 2021 update), or AKS on Windows Server 2019 Datacenter (>= April 2021 update), then the Kubernetes configuration is included at no charge.
If the value of `managedIdentityCertificateExpirationTime` indicates a timestamp
## If I am already using CI/CD pipelines, can I still use Azure Arc-enabled Kubernetes or AKS and GitOps configurations?
-Yes, you can still use configurations on a cluster receiving deployments via a CI/CD pipeline. Compared to traditional CI/CD pipelines, GitOps configurations feature some extra benefits:
+Yes, you can still use configurations on a cluster receiving deployments via a CI/CD pipeline. Compared to traditional CI/CD pipelines, GitOps configurations feature some extra benefits.
### Drift reconciliation
The CI/CD pipeline applies changes only once during pipeline run. However, the G
### Apply GitOps at scale
-CI/CD pipelines are useful for event-driven deployments to your Kubernetes cluster (for example, a push to a Git repository). However, if you want to deploy the same configuration to all of your Kubernetes clusters, you would need to manually configure each Kubernetes cluster's credentials to the CI/CD pipeline.
+CI/CD pipelines are useful for event-driven deployments to your Kubernetes cluster (for example, a push to a Git repository). However, to deploy the same configuration to all of your Kubernetes clusters, you need to manually configure each Kubernetes cluster's credentials to the CI/CD pipeline.
-For Azure Arc-enabled Kubernetes, since Azure Resource Manager manages your GitOps configurations, you can automate creating the same configuration across all Azure Arc-enabled Kubernetes and AKS resources using Azure Policy, within scope of a subscription or a resource group. This capability is even applicable to Azure Arc-enabled Kubernetes and AKS resources created after the policy assignment.
+For Azure Arc-enabled Kubernetes, since Azure Resource Manager manages your GitOps configurations, you can automate creating the same configuration across all Azure Arc-enabled Kubernetes and AKS resources using Azure Policy, within the scope of a subscription or a resource group. This capability is even applicable to Azure Arc-enabled Kubernetes and AKS resources created after the policy assignment.
This feature applies baseline configurations (like network policies, role bindings, and pod security policies) across the entire Kubernetes cluster inventory to meet compliance and governance requirements.
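+
+For illustration, a single GitOps configuration created on one cluster with the Azure CLI looks roughly like the following (a sketch with placeholder names); an Azure Policy assignment can then roll out the same definition across your fleet:
+
+```azurecli-interactive
+# Create a Flux (GitOps) configuration on an Arc-enabled cluster
+az k8s-configuration flux create \
+  --resource-group <RESOURCE_GROUP> \
+  --cluster-name <CLUSTER_NAME> \
+  --cluster-type connectedClusters \
+  --name cluster-config \
+  --url https://github.com/<ORG>/<REPO> \
+  --branch main \
+  --kustomization name=infra path=./infrastructure prune=true
+```
+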
The feature to enable storing customer data in a single region is currently only
* Already have an AKS cluster or an Azure Arc-enabled Kubernetes cluster? [Create GitOps configurations on your Azure Arc-enabled Kubernetes cluster](./tutorial-use-gitops-flux2.md). * Learn how to [setup a CI/CD pipeline with GitOps](./tutorial-gitops-flux2-ci-cd.md). * Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
+* Experience Azure Arc-enabled Kubernetes automated scenarios with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_k8s/).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (C
## Next steps
-* Explore the [Cloud Adoption Framework for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-kubernetes/eslz-arc-kubernetes-identity-access-management)
-* [Connect an existing Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md)
+* Learn about best practices and design patterns through the [Cloud Adoption Framework for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-kubernetes/eslz-arc-kubernetes-identity-access-management).
+* Try out Arc-enabled Kubernetes without provisioning a full environment by using the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_k8s/).
+* [Connect an existing Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md).
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
ms.devlang: azurecli
Get started with Azure Arc-enabled Kubernetes by using Azure CLI or Azure PowerShell to connect an existing Kubernetes cluster to Azure Arc.
-For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enabled Kubernetes agent overview](./conceptual-agent-overview.md).
+For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enabled Kubernetes agent overview](./conceptual-agent-overview.md). To try things out in a sample/practice experience, visit the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_k8s/).
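+
+At its core, the connection step later in this quickstart comes down to a command along these lines (a sketch with placeholder names, using your current kubeconfig context):
+
+```azurecli-interactive
+# Connect an existing Kubernetes cluster to Azure Arc
+az connectedk8s connect --name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP>
+```
+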
## Prerequisites
Remove-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName Azure
* Learn how to [deploy configurations using GitOps with Flux v2](tutorial-use-gitops-flux2.md). * [Troubleshoot common Azure Arc-enabled Kubernetes issues](troubleshooting.md).
+* Experience Azure Arc-enabled Kubernetes automated scenarios with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_k8s/).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines). * Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md). * Experience Azure Arc by exploring the [Azure Arc Jumpstart](https://aka.ms/AzureArcJumpstart).
-* Learn about best practices and design patterns trough the various [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady).
+* Learn about best practices and design patterns through the [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady).
* Understand [network requirements for Azure Arc](network-requirements-consolidated.md).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
To resolve the error, one or more network misconfigurations may need to be addre
Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files.
-## Azure-Arc enabled VMs on Azure Stack HCI issues
+## Azure Arc-enabled VMs on Azure Stack HCI issues
-For general help resolving issues related to Azure-Arc enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms).
+For general help resolving issues related to Azure Arc-enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms).
### Authentication handshake failure
azure-arc Deployment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md
Title: Azure Connected Machine agent deployment options description: Learn about the different options to onboard machines to Azure Arc-enabled servers. Previously updated : 10/08/2022 Last updated : 05/04/2023
Be sure to review the basic [prerequisites](prerequisites.md) and [network confi
* Learn about the Azure Connected Machine agent [prerequisites](prerequisites.md) and [network requirements](network-requirements.md). * Review the [Planning and deployment guide for Azure Arc-enabled servers](plan-at-scale-deployment.md) * Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md).
+* Try out Arc-enabled servers by using the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_servers/).
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. Previously updated : 06/06/2022 Last updated : 05/04/2023
Get started with [Azure Arc-enabled servers](../overview.md) to manage and gover
In this quickstart, you'll deploy and configure the Azure Connected Machine agent on a Windows or Linux machine hosted outside of Azure, so that it can be managed through Azure Arc-enabled servers.
+> [!TIP]
+> If you prefer to try out things in a sample/practice experience, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_servers/).
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Use the Azure portal to create a script that automates the agent download and in
:::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server-expanded.png"::: > [!NOTE]
- > In the portal, you can also reach the page for adding servers by searching for and selecting "Servers - Azure Arc" and then selecting **+Add**.
+ > In the portal, you can also reach this page by searching for and selecting "Servers - Azure Arc" and then selecting **+Add**.
1. Review the information on the **Prerequisites** page, then select **Next**.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
This parameter specifies a resource in Azure Resource Manager to delete from Azu
> [!NOTE] > If you have deployed one or more Azure VM extensions to your Azure Arc-enabled server and you delete its registration in Azure, the extensions remain installed and may continue performing their functions. Any machine intended to be retired or no longer managed by Azure Arc-enabled servers should first have its [extensions removed](#step-1-remove-vm-extensions) before removing its registration from Azure.
-To disconnect using a service principal, run the command below. Be sure to specify a service principal that has the required roles for disconnecting servers; this will not be the same service principal that was used to onboard the server:
+To disconnect using a service principal, run the command below. Be sure to specify a service principal that has the required roles for disconnecting servers (that is, the **Azure Connected Machine Resource Administrator** role). This isn't the same service principal that was used to onboard the server:
`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword>`
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Before you start connecting your machines, review the following requirements:
1. Make sure you have administrator permission on the machines you want to onboard. Administrator permissions are required to install the Connected Machine agent on the machines; on Linux by using the root account, and on Windows as a member of the Local Administrators group.
-1. Review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. You will need to have the **Azure Connected Machine Onboarding** role or the **Contributor** role for the resource group of the machine.
+1. Review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. You need the **Azure Connected Machine Onboarding** role or the **Contributor** role for the resource group of the machine. Make sure to register the following Azure resource providers in your target subscription beforehand.
+
+ * Microsoft.HybridCompute
+ * Microsoft.GuestConfiguration
+ * Microsoft.HybridConnectivity
+ * Microsoft.AzureArcData (if you plan to Arc-enable SQL Servers)
+
+ For detailed steps, see [Azure resource providers prerequisites](prerequisites.md#azure-resource-providers). A CLI sketch is also shown below.
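+
+ A minimal example of registering these providers with the Azure CLI (run once per target subscription; `Microsoft.AzureArcData` is only needed if you plan to Arc-enable SQL Servers):
+
+ ```azurecli-interactive
+ # Register the resource providers required for Azure Arc-enabled servers
+ az provider register --namespace Microsoft.HybridCompute
+ az provider register --namespace Microsoft.GuestConfiguration
+ az provider register --namespace Microsoft.HybridConnectivity
+ az provider register --namespace Microsoft.AzureArcData
+ ```
+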
For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
After you install the agent and configure it to connect to Azure Arc-enabled ser
- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Learn how to [troubleshoot agent connection issues](troubleshoot-agent-onboard.md). - Learn how to manage your machines using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that machines are reporting to the expected Log Analytics workspace, monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and more.
-```
-
-```
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
Azure Arc-enabled servers stores customer data. By default, customer data stays
## Next steps * Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review the [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
+* Try out Arc-enabled servers by using the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_servers/).
* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+* Explore the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management).
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: Plan and deploy Azure Arc-enabled servers description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 04/27/2022 Last updated : 05/04/2023
Phase 3 is when administrators or system engineers can enable automation of manu
## Next steps
+* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management).
* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md). * Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md). * Learn how to simplify deployment with other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and other supported [Azure VM extensions](manage-vm-extensions.md).
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Once your Azure Arc Private Link Scope is created, you need to connect it with o
1. On the **Configuration** page,
- a. Choose the **virtual network** and **subnet** that you want to connect to your Azure-Arc enabled server.
+ a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below.
azure-arc Ssh Arc Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md
Title: Troubleshoot SSH access to Azure Arc-enabled servers issues
-description: This article tells how to troubleshoot and resolve issues with the SSH access to Arc-enabled servers.
Previously updated : 03/21/2022
+description: Learn how to troubleshoot and resolve issues with SSH access to Arc-enabled servers.
Last updated : 05/04/2023
-# Troubleshoot SSH access to Azure Arc enabled servers
+# Troubleshoot SSH access to Azure Arc-enabled servers
-This article provides information on troubleshooting and resolving issues that may occur while attempting to connect to Azure Arc enabled servers via SSH.
-For general information, see [SSH access to Arc enabled servers overview](./ssh-arc-overview.md).
+This article provides information on troubleshooting and resolving issues that may occur while attempting to connect to Azure Arc-enabled servers via SSH.
+For general information, see [SSH access to Arc-enabled servers overview](./ssh-arc-overview.md).
> [!IMPORTANT] > SSH for Arc-enabled servers is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Client-side issues+ These issues are due to errors that occur on the machine that the user is connecting from. ### Incorrect Azure subscription
-This occurs when the active subscription for Azure CLI isn't the same as the server that is being connected to.
-Possible errors:
+
+This problem occurs when the active subscription for Azure CLI isn't the same as the server that is being connected to. Possible errors:
+
+- `Unable to determine the target machine type as Azure VM or Arc Server`
+- `Unable to determine that the target machine is an Arc Server`
+- `Unable to determine that the target machine is an Azure VM`
+- `The resource \<name\> in the resource group \<resource group\> was not found`
Resolution:+
+- Run ```az account set -s <AzureSubscriptionId>``` where `AzureSubscriptionId` corresponds to the subscription that contains the target resource.
### Unable to locate client binaries
-This issue occurs when the client side SSH binaries required to connect cannot be found.
-Error:
+
+This issue occurs when the client side SSH binaries required to connect aren't found. Possible errors:
+
+- `Failed to create ssh key file with error: \<ERROR\>.`
+- `Failed to run ssh command with error: \<ERROR\>.`
+- `Failed to get certificate info with error: \<ERROR\>.`
+- `Failed to create ssh key file with error: [WinError 2] The system cannot find the file specified.`
+- `Failed to create ssh key file with error: [Errno 2] No such file or directory: 'ssh-keygen'.`
Resolution:+
+- Provide the path to the folder that contains the SSH client executables by using the ```--ssh-client-folder``` parameter.
## Server-side issues
-### SSH traffic is not allowed on the server
-This issue occurs when SSHD isn't running on the server, or SSH traffic isn't allowed on the server.
-Possible errors:
+
+### SSH traffic not allowed on the server
+
+This issue occurs when SSHD isn't running on the server, or SSH traffic isn't allowed on the server. Error:
+
+- `{"level":"fatal","msg":"sshproxy: error copying information from the connection: read tcp 192.168.1.180:60887-\u003e40.122.115.96:443: wsarecv: An existing connection was forcibly closed by the remote host.","time":"2022-02-24T13:50:40-05:00"}`
Resolution:+
+- Ensure that the SSHD service is running on the Arc-enabled server.
+- Ensure that port 22 (or other nondefault port) is listed in allowed incoming connections. Run `azcmagent config list` on the Arc-enabled server in an elevated session. The ssh port (22) isn't set by default, so you must add it. This setting is used by other services, like admin center, so just add port 22 without deleting previously added ports.
```powershell # Set 22 port:
Resolution:
# Add multiple ports: azcmagent config set incomingconnections.ports 22,6516 ```
-
+ ## Azure permissions issues ### Incorrect role assignments
-This issue occurs when the current user does not have the proper role assignment on the target resource, specifically a lack of "read" permissions.
-Possible errors:
+
+This issue occurs when the current user doesn't have the proper role assignment on the target resource, specifically a lack of `read` permissions. Possible errors:
+
+- `Unable to determine the target machine type as Azure VM or Arc Server`
+- `Unable to determine that the target machine is an Arc Server`
+- `Unable to determine that the target machine is an Azure VM`
+- `Permission denied (publickey).`
+- `Request for Azure Relay Information Failed: (AuthorizationFailed) The client '\<user name\>' with object id '\<ID\>' does not have authorization to perform action 'Microsoft.HybridConnectivity/endpoints/listCredentials/action' over scope '/subscriptions/\<Subscription ID\>/resourceGroups/\<Resource Group\>/providers/Microsoft.HybridCompute/machines/\<Machine Name\>/providers/Microsoft.HybridConnectivity/endpoints/default' or the scope is invalid. If access was recently granted, please refresh your credentials.`
Resolution:
-### HybridConnectiviry RP was not registered
-This issue occurs when the HybridConnectivity RP has not been registered for the subscription.
-Error:
+- Ensure that you have Contributor or Owner permissions on the resource you're connecting to.
+- If you're using Azure AD login, ensure you have the Virtual Machine User Login or the Virtual Machine Administrator Login role (see the example after this list).
+
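+A hedged example of granting one of these roles with the Azure CLI (placeholder values; the scope targets the Arc-enabled server resource):
+
+```azurecli-interactive
+# Assign the Virtual Machine User Login role on an Arc-enabled server
+az role assignment create \
+  --assignee <USER_OBJECT_ID_OR_UPN> \
+  --role "Virtual Machine User Login" \
+  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.HybridCompute/machines/<MACHINE_NAME>"
+```
+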
+### HybridConnectivity RP not registered
+
+This issue occurs when the HybridConnectivity resource provider isn't registered for the subscription. Error:
+
+- `Request for Azure Relay Information Failed: (NoRegisteredProviderFound) Code: NoRegisteredProviderFound`
Resolution:-
- ## Disable SSH to Arc-enabled servers
- This functionality can be disabled by completing the following actions:
- - Remove the SSH port from the allowedincoming ports: ```azcmagent config set incomingconnections.ports <other open ports,...>```
- - Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
+
+- Run ```az provider register -n Microsoft.HybridConnectivity```
+- Confirm success by running ```az provider show -n Microsoft.HybridConnectivity``` and verifying that `registrationState` is set to `Registered`.
+- Restart the hybrid agent on the Arc-enabled server.
+
+## Disable SSH to Arc-enabled servers
+
+To disable the functionality, complete the following actions:
+
+- Remove the SSH port from the allowed incoming ports: ```azcmagent config set incomingconnections.ports <other open ports,...>```
+- Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<Arc-enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
+
+## Next steps
+
+- Learn about SSH access to [Azure Arc-enabled servers](ssh-arc-overview.md).
+- Learn about troubleshooting [agent connection issues](troubleshoot-agent-onboard.md).
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
In this section, you will enable resource pools, networks, and other non-VM reso
1. (Optional) Select **Install guest agent** and then provide the Administrator username and password of the guest operating system.
- The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc enabled VMware vSphere](manage-vmware-vms-in-azure.md).
+ The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc-enabled VMware vSphere](manage-vmware-vms-in-azure.md).
1. Select **Enable** to start the deployment of the VM represented in Azure.
azure-arc Manage Access To Arc Vmware Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md
Last updated 11/08/2021
# Manage access to VMware resources through Azure Role-Based Access Control
-Once your VMware vCenter resources have been enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure and allow your teams to deploy and manage VMs.
+Once your VMware vCenter resources have been enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure and allow your teams to deploy and manage VMs.
## Arc-enabled VMware vSphere built-in roles There are three built-in roles to meet your access control requirements. You can apply these roles to a whole subscription, resource group, or a single resource. -- **Azure Arc VMware Administrator** role - is used by administrators
+- **Azure Arc VMware Administrator** role - used by administrators
-- **Azure Arc VMware Private Cloud User** role - is used by anyone who needs to deploy and manage VMs
+- **Azure Arc VMware Private Cloud User** role - used by anyone who needs to deploy and manage VMs
-- **Azure Arc VMware VM Contributor** role - is used by anyone who needs to deploy and manage VMs
+- **Azure Arc VMware VM Contributor** role - used by anyone who needs to deploy and manage VMs
### Azure Arc VMware Administrator role
-The **Azure Arc VMware Administrator** role is a built-in role that provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing Azure Arc enabled VMware vSphere deployment.
+The **Azure Arc VMware Administrator** role is a built-in role that provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing Azure Arc-enabled VMware vSphere deployment.
### Azure Arc VMware Private Cloud User role The **Azure Arc VMware Private Cloud User** role is a built-in role that provides permissions to use the VMware vSphere resources made accessible through Azure. Assign this role to any users or groups that need to deploy, update, or delete VMs.
-We recommend assigning this role at the individual resource pool (or host or cluster), virtual network, or template that you want the user to deploy VMs using.
+We recommend assigning this role at the individual resource pool (or host or cluster), virtual network, or template with which you want the user to deploy VMs.
### Azure Arc VMware VM Contributor The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations. Assign this role to any users or groups that need to deploy, update, or delete VMs.
-We recommend assigning this role at the subscription or resource group you want the user to deploy VMs using:
+We recommend assigning this role for the subscription or resource group to which you want the user to deploy VMs.
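+
+For example, you might assign this role at resource group scope with the Azure CLI (a sketch with placeholder values):
+
+```azurecli-interactive
+# Grant a user or group the Azure Arc VMware VM Contributor role on a resource group
+az role assignment create \
+  --assignee <USER_OR_GROUP_OBJECT_ID> \
+  --role "Azure Arc VMware VM Contributor" \
+  --resource-group <RESOURCE_GROUP>
+```
+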
## Assigning the roles to users/groups
We recommend assigning this role at the subscription or resource group you want
## Next steps
-[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md)
+- [Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the
## Next steps -- [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)--- [Support matrix for Arc enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md)
+- [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).
+- View the [support matrix for Arc-enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md).
+- Try out Arc-enabled VMware vSphere by using the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_vsphere/).
azure-functions Durable Functions Node Model Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-node-model-upgrade.md
+
+ Title: Upgrade your Durable Functions app to version 4 of the Node.js programming model
+description: This article shows you how to upgrade your existing Durable Functions apps running on v3 of the Node.js programming model to v4.
++ Last updated : 04/06/2023
+ms.devlang: javascript, typescript
++
+zone_pivot_groups: programming-languages-set-functions-nodejs
++
+# Upgrade your Durable Functions app to version 4 of the Node.js programming model
+
+>[!NOTE]
+> Version 4 of the Node.js programming model is currently in public preview. Learn more by visiting the Node [Functions developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
+
+This article provides a guide to upgrade your existing Durable Functions app to version 4 of the Node.js programming model. Note that this article uses "TIP" banners to summarize the key steps needed to upgrade your app.
+
+If you're interested in creating a brand new v4 app instead, you can follow the Visual Studio Code quickstarts for [JavaScript](./quickstart-js-vscode.md?pivots=nodejs-model-v4) and [TypeScript](./quickstart-ts-vscode.md?pivots=nodejs-model-v4).
+
+>[!TIP]
+> Before following this guide, make sure you follow the general [version 4 upgrade guide](../functions-node-upgrade-v4.md).
+
+## Prerequisites
+
+Before following this guide, make sure you follow these steps first:
+
+- Install [Node.js](https://nodejs.org/en/download/releases) version 18.x+.
+- Install [TypeScript](https://www.typescriptlang.org/) version 4.x+.
+- Run your app on [Azure Functions Runtime](../functions-versions.md?tabs=v4&pivots=programming-language-javascript) version 4.16.5+.
+- Install [Azure Functions Core Tools](../functions-run-local.md?tabs=v4) version 4.0.5095+.
+- Review the general [Azure Functions Node.js programming model v4 upgrade guide](../functions-node-upgrade-v4.md).
+
+## Upgrade the `durable-functions` npm package
+
+>[!NOTE]
+>The programming model version should not be confused with the `durable-functions` package version. `durable-functions` package version 3.x is required for the v4 programming model, while `durable-functions` version 2.x is required for the v3 programming model.
+
+The v4 programming model is supported by v3.x of the `durable-functions` npm package. In your programming model v3 app, you likely had `durable-functions` v2.x listed in your dependencies. Make sure to update to the (currently in preview) v3.x of the `durable-functions` package.
+
+>[!TIP]
+> Upgrade to the preview v3.x of the `durable-functions` npm package. You can do this with the following command:
+>
+> ```bash
+> npm install durable-functions@preview
+> ```
+
+## Register your Durable Functions Triggers
+
+In the v4 programming model, declaring triggers and bindings in a separate `function.json` file is a thing of the past! Now you can register your Durable Functions triggers and bindings directly in code, using the new APIs found in the `app` namespace on the root of the `durable-functions` package. See the code snippets below for examples.
+
+**Migrating an orchestration**
+
+# [v4 model](#tab/v4)
+
+```javascript
+const df = require('durable-functions');
+
+const activityName = 'helloActivity';
+
+df.app.orchestration('durableOrchestrator', function* (context) {
+ const outputs = [];
+ outputs.push(yield context.df.callActivity(activityName, 'Tokyo'));
+ outputs.push(yield context.df.callActivity(activityName, 'Seattle'));
+ outputs.push(yield context.df.callActivity(activityName, 'Cairo'));
+
+ return outputs;
+});
+```
+
+# [v3 model](#tab/v3)
+
+```javascript
+const df = require("durable-functions");
+
+const activityName = "hello"
+
+module.exports = df.orchestrator(function* (context) {
+ const outputs = [];
+ outputs.push(yield context.df.callActivity(activityName, "Tokyo"));
+ outputs.push(yield context.df.callActivity(activityName, "Seattle"));
+ outputs.push(yield context.df.callActivity(activityName, "London"));
+
+ return outputs;
+});
+```
+
+```json
+{
+ "bindings": [
+ {
+ "name": "context",
+ "type": "orchestrationTrigger",
+ "direction": "in"
+ }
+ ]
+}
+```
+++
+# [v4 model](#tab/v4)
+
+```typescript
+import * as df from 'durable-functions';
+import { OrchestrationContext, OrchestrationHandler } from 'durable-functions';
+
+const activityName = 'hello';
+
+const durableHello1Orchestrator: OrchestrationHandler = function* (context: OrchestrationContext) {
+ const outputs = [];
+ outputs.push(yield context.df.callActivity(activityName, 'Tokyo'));
+ outputs.push(yield context.df.callActivity(activityName, 'Seattle'));
+ outputs.push(yield context.df.callActivity(activityName, 'Cairo'));
+
+ return outputs;
+};
+df.app.orchestration('durableOrchestrator', durableHello1Orchestrator);
+```
+
+# [v3 model](#tab/v3)
+
+```typescript
+import * as df from "durable-functions"
+
+const activityName = "hello"
+
+const orchestrator = df.orchestrator(function* (context) {
+ const outputs = [];
+ outputs.push(yield context.df.callActivity(activityName, "Tokyo"));
+ outputs.push(yield context.df.callActivity(activityName, "Seattle"));
+ outputs.push(yield context.df.callActivity(activityName, "London"));
+
+ return outputs;
+});
+
+export default orchestrator;
+```
+
+```json
+{
+ "bindings": [
+ {
+ "name": "context",
+ "type": "orchestrationTrigger",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "../dist/durableOrchestrator/index.js"
+}
+```
++++
+**Migrating an entity**
++
+# [v4 model](#tab/v4)
+
+```javascript
+const df = require('durable-functions');
+
+df.app.entity('Counter', (context) => {
+ const currentValue = context.df.getState(() => 0);
+ switch (context.df.operationName) {
+ case 'add':
+ const amount = context.df.getInput();
+ context.df.setState(currentValue + amount);
+ break;
+ case 'reset':
+ context.df.setState(0);
+ break;
+ case 'get':
+ context.df.return(currentValue);
+ break;
+ }
+});
+```
+
+# [v3 model](#tab/v3)
+
+```javascript
+const df = require("durable-functions");
+
+module.exports = df.entity(function (context) {
+ const currentValue = context.df.getState(() => 0);
+ switch (context.df.operationName) {
+ case "add":
+ const amount = context.df.getInput();
+ context.df.setState(currentValue + amount);
+ break;
+ case "reset":
+ context.df.setState(0);
+ break;
+ case "get":
+ context.df.return(currentValue);
+ break;
+ }
+});
+```
+
+```json
+{
+ "bindings": [
+ {
+ "name": "context",
+ "type": "entityTrigger",
+ "direction": "in"
+ }
+ ]
+}
+```
++++
+# [v4 model](#tab/v4)
+
+```typescript
+import * as df from 'durable-functions';
+import { EntityContext, EntityHandler } from 'durable-functions';
+
+const counterEntity: EntityHandler<number> = (context: EntityContext<number>) => {
+ const currentValue: number = context.df.getState(() => 0);
+ switch (context.df.operationName) {
+ case 'add':
+ const amount: number = context.df.getInput();
+ context.df.setState(currentValue + amount);
+ break;
+ case 'reset':
+ context.df.setState(0);
+ break;
+ case 'get':
+ context.df.return(currentValue);
+ break;
+ }
+};
+df.app.entity('Counter', counterEntity);
+```
+
+# [v3 model](#tab/v3)
+
+```typescript
+import * as df from "durable-functions"
+
+const entity = df.entity(function (context) {
+ const currentValue = context.df.getState(() => 0) as number;
+ switch (context.df.operationName) {
+ case "add":
+ const amount = context.df.getInput() as number;
+ context.df.setState(currentValue + amount);
+ break;
+ case "reset":
+ context.df.setState(0);
+ break;
+ case "get":
+ context.df.return(currentValue);
+ break;
+ }
+});
+
+export default entity;
+```
+
+```json
+{
+ "bindings": [
+ {
+ "name": "context",
+ "type": "entityTrigger",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "../dist/Counter/index.js"
+}
+```
+
+-
+
+**Migrating an activity**
++
+# [v4 model](#tab/v4)
+
+```javascript
+const df = require('durable-functions');
+
+df.app.activity('hello', {
+ handler: (input) => {
+ return `Hello, ${input}`;
+ },
+});
+```
+
+# [v3 model](#tab/v3)
+
+```javascript
+module.exports = async function (context) {
+ return `Hello, ${context.bindings.name}!`;
+};
+```
+
+```json
+{
+ "bindings": [
+ {
+ "name": "name",
+ "type": "activityTrigger",
+ "direction": "in"
+ }
+ ]
+}
+```
+++
+# [v4 model](#tab/v4)
+
+```typescript
+import * as df from 'durable-functions';
+import { ActivityHandler } from "durable-functions";
+
+const helloActivity: ActivityHandler = (input: string): string => {
+ return `Hello, ${input}`;
+};
+
+df.app.activity('hello', { handler: helloActivity });
+```
+
+# [v3 model](#tab/v3)
+
+```typescript
+import { AzureFunction, Context } from "@azure/functions"
+
+const helloActivity: AzureFunction = async function (context: Context): Promise<string> {
+ return `Hello, ${context.bindings.name}!`;
+};
+
+export default helloActivity;
+```
+
+```json
+{
+ "bindings": [
+ {
+ "name": "name",
+ "type": "activityTrigger",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "../dist/hello/index.js"
+}
+```
+++
+>[!TIP]
+> Remove `function.json` files from your Durable Functions app. Instead, register your durable functions using the methods on the `app` namespace: `df.app.orchestration()`, `df.app.entity()`, and `df.app.activity()`.
++
+## Register your Durable Client input binding
+
+In the v4 model, registering secondary input bindings, like durable clients, is also done in code! Use the `input.durableClient()` method to register a durable client input _binding_ to a function of your choice. In the function body, use `getClient()` to retrieve the client instance, as before. The example below uses an HTTP-triggered function.
++
+# [v4 model](#tab/v4)
+
+```javascript
+const { app } = require('@azure/functions');
+const df = require('durable-functions');
+
+app.http('durableHttpStart', {
+ route: 'orchestrators/{orchestratorName}',
+ extraInputs: [df.input.durableClient()],
+ handler: async (_request, context) => {
+ const client = df.getClient(context);
+ // Use client in function body
+ },
+});
+```
+
+# [v3 model](#tab/v3)
+
+```javascript
+const df = require("durable-functions");
+
+module.exports = async function (context, req) {
+ const client = df.getClient(context);
+ // Use client in function body
+};
+```
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "route": "orchestrators/{functionName}",
+ "methods": [
+ "post",
+ "get"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "starter",
+ "type": "durableClient",
+ "direction": "in"
+ }
+ ]
+}
+```
++++
+# [v4 model](#tab/v4)
+
+```typescript
+import { app, HttpHandler, HttpRequest, HttpResponse, InvocationContext } from '@azure/functions';
+import * as df from 'durable-functions';
+
+const durableHttpStart: HttpHandler = async (request: HttpRequest, context: InvocationContext): Promise<HttpResponse> => {
+ const client = df.getClient(context);
+ // Use client in function body
+};
+
+app.http('durableHttpStart', {
+ route: 'orchestrators/{orchestratorName}',
+ extraInputs: [df.input.durableClient()],
+ handler: durableHttpStart,
+});
+```
+
+# [v3 model](#tab/v3)
+
+```typescript
+import * as df from "durable-functions"
+import { AzureFunction, Context, HttpRequest } from "@azure/functions"
+
+const durableHttpStart: AzureFunction = async function (context: Context): Promise<any> {
+ const client = df.getClient(context);
+ // Use client in function body
+};
+
+export default durableHttpStart;
+```
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "route": "orchestrators/{functionName}",
+ "methods": [
+ "post",
+ "get"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "starter",
+ "type": "durableClient",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "../dist/durableHttpStart/index.js"
+}
+```
+++
+>[!TIP]
+> Use the `input.durableClient()` method to register a durable client extra input to your client function. Use `getClient()` as normal to retrieve a `DurableClient` instance.
+
+## Update your Durable Client API calls
+
+In `v3.x` of `durable-functions`, multiple APIs on the `DurableClient` class (renamed from `DurableOrchestrationClient`) have been simplified to make calling them easier and more streamlined. For many optional arguments to APIs, you now pass one options object, instead of multiple discrete optional arguments. Below is an example of these changes:
++
+# [v4 model](#tab/v4)
+
+```javascript
+const client = df.getClient(context)
+const status = await client.getStatus('instanceId', {
+ showHistory: false,
+ showHistoryOutput: false,
+ showInput: true
+});
+```
+
+# [v3 model](#tab/v3)
+
+```javascript
+const client = df.getClient(context);
+const status = await client.getStatus('instanceId', false, false, true);
+```
++++
+# [v4 model](#tab/v4)
+
+```typescript
+const client: DurableClient = df.getClient(context);
+const status: DurableOrchestrationStatus = await client.getStatus('instanceId', {
+ showHistory: false,
+ showHistoryOutput: false,
+ showInput: true
+});
+```
+
+# [v3 model](#tab/v3)
+
+```typescript
+const client: DurableOrchestrationClient = df.getClient(context);
+const status: DurableOrchestrationStatus = await client.getStatus('instanceId', false, false, true);
+```
+++
+Below, find the full list of changes:
+
+<table>
+<tr>
+<th> V3 model (durable-functions v2.x) </th>
+<th> V4 model (durable-functions v3.x) </th>
+</tr>
+<tr>
+<td>
+
+```typescript
+getStatus(
+ instanceId: string,
+ showHistory?: boolean,
+ showHistoryOutput?: boolean,
+ showInput?: boolean
+): Promise<DurableOrchestrationStatus>
+```
+</td>
+<td>
+
+```typescript
+getStatus(
+ instanceId: string,
+ options?: GetStatusOptions
+): Promise<DurableOrchestrationStatus>
+```
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+getStatusBy(
+ createdTimeFrom: Date | undefined,
+ createdTimeTo: Date | undefined,
+ runtimeStatus: OrchestrationRuntimeStatus[]
+): Promise<DurableOrchestrationStatus[]>
+```
+
+</td>
+<td>
+
+```typescript
+getStatusBy(
+ options: OrchestrationFilter
+): Promise<DurableOrchestrationStatus[]>
+```
+
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+purgeInstanceHistoryBy(
+ createdTimeFrom: Date,
+ createdTimeTo?: Date,
+ runtimeStatus?: OrchestrationRuntimeStatus[]
+): Promise<PurgeHistoryResult>
+```
+
+</td>
+<td>
+
+```typescript
+purgeInstanceHistoryBy(
+ options: OrchestrationFilter
+): Promise<PurgeHistoryResult>
+```
+
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+raiseEvent(
+ instanceId: string,
+ eventName: string,
+ eventData: unknown,
+ taskHubName?: string,
+ connectionName?: string
+): Promise<void>
+```
+
+</td>
+<td>
+
+```typescript
+raiseEvent(
+ instanceId: string,
+ eventName: string,
+ eventData: unknown,
+ options?: TaskHubOptions
+): Promise<void>
+```
+
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+readEntityState<T>(
+ entityId: EntityId,
+ taskHubName?: string,
+ connectionName?: string
+): Promise<EntityStateResponse<T>>
+```
+
+</td>
+<td>
+
+```typescript
+readEntityState<T>(
+ entityId: EntityId,
+ options?: TaskHubOptions
+): Promise<EntityStateResponse<T>>
+```
+
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+rewind(
+ instanceId: string,
+ reason: string,
+ taskHubName?: string,
+ connectionName?: string
+): Promise<void>
+```
+
+</td>
+<td>
+
+```typescript
+rewind(
+ instanceId: string,
+ reason: string,
+ options?: TaskHubOptions
+): Promise<void>
+```
+
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+signalEntity(
+ entityId: EntityId,
+ operationName?: string,
+ operationContent?: unknown,
+ taskHubName?: string,
+ connectionName?: string
+): Promise<void>
+```
+</td>
+<td>
+
+```typescript
+signalEntity(
+ entityId: EntityId,
+ operationName?: string,
+ operationContent?: unknown,
+ options?: TaskHubOptions
+): Promise<void>
+```
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+startNew(
+ orchestratorFunctionName: string,
+ instanceId?: string,
+ input?: unknown
+): Promise<string>
+```
+
+</td>
+<td>
+
+```typescript
+startNew(
+ orchestratorFunctionName: string,
+ options?: StartNewOptions
+): Promise<string>;
+```
+
+</td>
+</tr>
+<tr>
+<td>
+
+```typescript
+waitForCompletionOrCreateCheckStatusResponse(
+ request: HttpRequest,
+ instanceId: string,
+ timeoutInMilliseconds?: number,
+ retryIntervalInMilliseconds?: number
+): Promise<HttpResponse>;
+```
+
+</td>
+<td>
+
+```typescript
+waitForCompletionOrCreateCheckStatusResponse(
+ request: HttpRequest,
+ instanceId: string,
+ waitOptions?: WaitForCompletionOptions
+): Promise<HttpResponse>;
+```
+
+</td>
+</tr>
+</table>
+
+>[!TIP]
+> Make sure to update your `DurableClient` API calls from discrete optional arguments to options objects, where applicable. See the list above for all APIs affected.
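+
+For example, here's a minimal sketch of migrating a `startNew` call to the new options-object style. The orchestrator name and the `instanceId`/`input` property names on `StartNewOptions` are illustrative assumptions based on the signatures listed above, not code from this article:
+
+```typescript
+import { app, HttpRequest, HttpResponse, InvocationContext } from '@azure/functions';
+import * as df from 'durable-functions';
+
+app.http('startOrchestration', {
+    extraInputs: [df.input.durableClient()],
+    handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponse> => {
+        const client = df.getClient(context);
+
+        // v3 model (durable-functions v2.x) equivalent, for comparison:
+        //   const instanceId = await client.startNew('myOrchestrator', 'myInstanceId', { value: 42 });
+
+        // v4 model (durable-functions v3.x): pass one options object instead of discrete arguments.
+        const instanceId = await client.startNew('myOrchestrator', {
+            instanceId: 'myInstanceId', // assumed StartNewOptions property
+            input: { value: 42 },       // assumed StartNewOptions property
+        });
+
+        return client.createCheckStatusResponse(request, instanceId);
+    },
+});
+```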
++
+## Update calls to callHttp API
+
+In v3.x of `durable-functions`, the `callHttp()` API for `DurableOrchestrationContext` was updated. The following changes were made:
+
+- Accept one options object for all arguments, instead of multiple optional arguments, to be more similar to frameworks such as [Express](https://expressjs.com/).
+- Rename `uri` argument to `url`.
+- Rename `content` argument to `body`.
+- Deprecate `asynchronousPatternEnabled` flag in favor of `enablePolling`.
+
+If your orchestrations used the `callHttp` API, make sure to update these API calls to conform to the above changes. Find an example below:
++
+# [v4 model](#tab/v4)
+
+```javascript
+const restartResponse = yield context.df.callHttp({
+ method: "POST",
+ url: `https://example.com`,
+ body: "body",
+ enablePolling: false
+});
+```
+
+# [v3 model](#tab/v3)
+
+```javascript
+const response = yield context.df.callHttp(
+ "POST",
+ `https://example.com`,
+ "body", // request content
+ undefined, // no request headers
+ undefined, // no token source
+ false // disable polling
+);
+```
++++
+# [v4 model](#tab/v4)
+
+```typescript
+const restartResponse = yield context.df.callHttp({
+ method: "POST",
+ url: `https://example.com`,
+ body: "body",
+ enablePolling: false
+});
+```
+
+# [v3 model](#tab/v3)
+
+```typescript
+const response = yield context.df.callHttp(
+ "POST",
+ `https://example.com`,
+ "body", // request content
+ undefined, // no request headers
+ undefined, // no token source
+ false // disable polling
+);
+```
+++
+> [!TIP]
+> Update your API calls to `callHttp` inside your orchestrations to use the new options object.
++
+## Leverage new types
+
+The `durable-functions` package now exposes types that weren't previously exported. These types let you strongly type your orchestrations, entities, activities, and client functions, and they improve IntelliSense when you author these functions. See the sketch after the following list.
+
+Below are some of the new exported types:
+
+- `OrchestrationHandler` and `OrchestrationContext` for orchestrations
+- `EntityHandler` and `EntityContext` for entities
+- `ActivityHandler` for activities
+- `DurableClient` class for client functions
+
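+As a brief sketch of how these types can be used (the orchestrator name here is illustrative, and it reuses the `hello` activity registered earlier in this article):
+
+```typescript
+import * as df from 'durable-functions';
+import { OrchestrationContext, OrchestrationHandler } from 'durable-functions';
+
+// A generator-based orchestrator, strongly typed with the newly exported types.
+const helloSequence: OrchestrationHandler = function* (context: OrchestrationContext) {
+    const outputs: string[] = [];
+    outputs.push(yield context.df.callActivity('hello', 'Tokyo'));
+    outputs.push(yield context.df.callActivity('hello', 'Seattle'));
+    return outputs;
+};
+
+df.app.orchestration('helloSequence', helloSequence);
+```
+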
+> [!TIP]
+> Strongly type your functions by leveraging new types exported from the `durable-functions` package!
++
+## Troubleshooting
+
+If you see the following error when running your orchestration code, make sure you are running on at least `v4.16.5` of the [Azure Functions Runtime](../functions-versions.md?tabs=v4&pivots=programming-language-javascript) or at least `v4.0.5095` of [Azure Functions Core Tools](../functions-run-local.md?tabs=v4) if running locally.
+
+```bash
+Exception: The orchestrator can not execute without an OrchestratorStarted event.
+Stack: TypeError: The orchestrator can not execute without an OrchestratorStarted event.
+```
+
+If that doesn't work, or if you encounter any other issues, you can always file a bug report in [our GitHub repo](https://github.com/Azure/azure-functions-durable-js).
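+
+As a quick sanity check before filing an issue, you can confirm the Core Tools version on your local machine (this assumes the Azure Functions Core Tools `func` CLI is installed):
+
+```bash
+# Prints the installed Azure Functions Core Tools version, for example 4.0.5095
+func --version
+```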
azure-functions Durable Functions Node Model Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md
The collection of virtual machines in an availability set that can possibly fail
See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json) ## geo
-A defined boundary for data residency that typically contains two or more regions. The boundaries may be within or beyond national borders and are influenced by tax regulation. Every geo has at least one region. Examples of geos are Asia Pacific and Japan. Also called *geography*.
+A defined boundary for data residency that typically contains two or more regions. The boundaries may be within or beyond national/regional borders and are influenced by tax regulation. Every geo has at least one region. Examples of geos are Asia Pacific and Japan. Also called *geography*.
See [Azure Regions](./availability-zones/cross-region-replication-azure.md) ## geo-replication
See the [Azure offer details page](https://azure.microsoft.com/support/legal/off
The secure web portal used to deploy and manage Azure services. ## region
-An area within a geo that does not cross national borders and contains one or more datacenters. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to as *location*.
+An area within a geo that does not cross national/regional borders and contains one or more datacenters. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to as *location*.
See [Azure Regions](./availability-zones/cross-region-replication-azure.md) ## resource
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
map.setCamera({
}); map.setStyle({
- style: 'satellite_with_roads'
+ style: 'satellite_road_labels'
}); ```
In Azure Maps, load the GeoJSON data into a data source and connect the data sou
map = new atlas.Map('myMap', { center: [-160, 20], zoom: 1,
- style: 'satellite_with_roads',
+ style: 'satellite_road_labels',
//Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: {
azure-monitor Use Azure Monitor Agent Troubleshooter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/use-azure-monitor-agent-troubleshooter.md
The Azure Monitor Agent isn't a service that runs in the context of an Azure Res
## Prerequisites
-The linux Troubleshooter requires Python 2.6+ or any Python3 installed on the machine. In addition, the following Python packages are required to run (all should be present on a default install of Python2 or Python3):
+- Ensure that the AMA agent is installed by checking for the directory C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent on Windows or /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-* on Linux.
+- The Linux Troubleshooter requires Python 2.6+ or any Python3 installed on the machine. In addition, the following Python packages are required to run (all should be present on a default install of Python2 or Python3):
|Python Package| Required for Python2? |Required for Python3?| |:|:|:|
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
In Java, many dependency calls can be automatically tracked by using the
You use this call if you want to track calls that the automated tracking doesn't catch. To turn off the standard dependency-tracking module in C#, edit [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) and delete the reference to `DependencyCollector.DependencyTrackingTelemetryModule`. For Java, see
-[Suppressing specific autocollected telemetry](./java-standalone-config.md#suppress-specific-auto-collected-telemetry).
+[Suppressing specific autocollected telemetry](./java-standalone-config.md#suppress-specific-autocollected-telemetry).
### Dependencies in Log Analytics
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Throttling is a concern because it can lead to missed alerts. The condition to t
In summary, we recommend `GetMetric()` because it does pre-aggregation, it accumulates values from all the `Track()` calls, and sends a summary/aggregate once every minute. The `GetMetric()` method can significantly reduce the cost and performance overhead by sending fewer data points while still collecting all relevant information. > [!NOTE]
-> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics, but the metrics implementation is different.
+> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#autocollected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics, but the metrics implementation is different.
## Get started with GetMetric
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 04/21/2023 Last updated : 05/04/2023 ms.devlang: java
# Configuration options: Azure Monitor Application Insights for Java
+This article shows you how to configure Azure Monitor Application Insights for Java.
## Connection string and role name
Connection string and role name are the most common settings you need to get sta
Connection string is required. Role name is important anytime you're sending data from different applications to the same Application Insights resource.
-You'll find more information and configuration options in the following sections.
+More information and configuration options are provided in the following sections.
## Configuration file path
You can specify your own configuration file path by using one of these two optio
* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.12.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.12.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
The file should contain only the connection string and nothing else.
Not setting the connection string disables the Java agent.
-If you have multiple applications deployed in the same JVM and want them to send telemetry to different instrumentation keys, see [Instrumentation key overrides (preview)](#instrumentation-key-overrides-preview).
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different connection strings, see [Connection string overrides (preview)](#connection-string-overrides-preview).
## Cloud role name
Sampling is also based on trace ID to help ensure consistent sampling decisions
Starting from 3.4.0, rate-limited sampling is available and is now the default. If no sampling has been configured, the default is now rate-limited sampling configured to capture at most
-(approximately) 5 requests per second, along with all the dependencies and logs on those requests.
+(approximately) five requests per second, along with all the dependencies and logs on those requests.
This configuration replaces the prior default, which was to capture all requests. If you still want to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the sampling percentage to 100.
This configuration replaces the prior default, which was to capture all requests
> The rate-limited sampling is approximate because internally it must adapt a "fixed" sampling percentage over time to emit accurate item counts on each telemetry record. Internally, the rate-limited sampling is tuned to adapt quickly (0.1 seconds) to new application loads. For this reason, you shouldn't see it exceed the configured rate by much, or for very long.
-This example shows how to set the sampling to capture at most (approximately) 1 request per second:
+This example shows how to set the sampling to capture at most (approximately) one request per second:
```json {
This example shows how to set the sampling to capture at most (approximately) 1
} ```
-Note that `requestsPerSecond` can be a decimal, so you can configure it to capture less than 1 request per second if you want. For example, a value of `0.5` means capture at most 1 request every 2 seconds.
+The `requestsPerSecond` setting can be a decimal, so you can configure it to capture less than one request per second. For example, a value of `0.5` means capture at most one request every 2 seconds.
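+
+As a minimal sketch (assuming the same `sampling` section shown in the JSON example above), that half-request-per-second limit would look like this:
+
+```json
+{
+  "sampling": {
+    "requestsPerSecond": 0.5
+  }
+}
+```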
You can also set the sampling percentage by using the environment variable `APPLICATIONINSIGHTS_SAMPLING_REQUESTS_PER_SECOND`. It then takes precedence over the rate limit specified in the JSON configuration.
If you want to collect some other JMX metrics:
In the preceding configuration example:
-* `name` is the metric name that will be assigned to this JMX metric (can be anything).
+* `name` is the metric name that is assigned to this JMX metric (can be anything).
* `objectName` is the [Object Name](https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html) of the JMX MBean that you want to collect. * `attribute` is the attribute name inside of the JMX MBean that you want to collect. Numeric and Boolean JMX metric values are supported. Boolean JMX metrics are mapped to `0` for false and `1` for true.
-If the JMX metric's `objectName` is dynamic and changes on each restart, you can specify it using an
-[object name pattern](https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html),
-e.g. `kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*`.
- ## Custom dimensions If you want to add custom dimensions to all your telemetry:
You can use `${...}` to read the value from the specified environment variable a
## Inherited attribute (preview) Starting from version 3.2.0, if you want to set a custom dimension programmatically on your request telemetry
-and have it inherited by dependency and log telemetry which are captured in the context of that request:
+and have it inherited by dependency and log telemetry, which are captured in the context of that request:
```json {
Connection string overrides allow you to override the [default connection string
} ```
-## Instrumentation key overrides (preview)
-
-This feature is in preview, starting from 3.2.3.
-
-Instrumentation key overrides allow you to override the [default instrumentation key](#connection-string). For example, you can:
-
-* Set one instrumentation key for one HTTP path prefix `/myapp1`.
-* Set another instrumentation key for another HTTP path prefix `/myapp2/`.
-
-```json
-{
- "preview": {
- "instrumentationKeyOverrides": [
- {
- "httpPathPrefix": "/myapp1",
- "instrumentationKey": "12345678-0000-0000-0000-0FEEDDADBEEF"
- },
- {
- "httpPathPrefix": "/myapp2",
- "instrumentationKey": "87654321-0000-0000-0000-0FEEDDADBEEF"
- }
- ]
- }
-}
-```
- ## Cloud role name overrides (preview) This feature is in preview, starting from 3.3.0.
Starting from version 3.2.0, if you want to capture controller "InProc" dependen
## Telemetry processors (preview)
-Yu can use telemetry processors to configure rules that will be applied to request, dependency, and trace telemetry. For example, you can:
+You can use telemetry processors to configure rules that are applied to request, dependency, and trace telemetry. For example, you can:
* Mask sensitive data. * Conditionally add custom dimensions.
For more information, see the [Telemetry processor](./java-standalone-telemetry-
> [!NOTE] > If you want to drop specific (whole) spans for controlling ingestion cost, see [Sampling overrides](./java-standalone-sampling-overrides.md).
-## Auto-collected logging
+## Autocollected logging
-Log4j, Logback, JBoss Logging, and java.util.logging are auto-instrumented. Logging performed via these logging frameworks is auto-collected.
+Log4j, Logback, JBoss Logging, and java.util.logging are autoinstrumented. Logging performed via these logging frameworks is autocollected.
Logging is only captured if it: * Meets the level that's configured for the logging framework. * Also meets the level that's configured for Application Insights.
-For example, if your logging framework is configured to log `WARN` (and above) from the package `com.example`,
-and Application Insights is configured to capture `INFO` (and above), Application Insights will only capture `WARN` (and above) from the package `com.example`.
+For example, if your logging framework is configured to log `WARN` (and higher) from the package `com.example`,
+and Application Insights is configured to capture `INFO` (and higher), Application Insights only captures `WARN` (and more severe) from the package `com.example`.
The default level configured for Application Insights is `INFO`. If you want to change this level:
Starting from 3.4.2, you can capture the log markers for Logback and Log4j 2:
} ```
-### Additional log attributes for Logback (preview)
+### Other log attributes for Logback (preview)
Starting from 3.4.3, you can capture `FileName`, `ClassName`, `MethodName`, and `LineNumber`, for Logback:
If needed, you can temporarily re-enable the previous behavior:
} ```
-## Auto-collected Micrometer metrics (including Spring Boot Actuator metrics)
+## Autocollected Micrometer metrics (including Spring Boot Actuator metrics)
-If your application uses [Micrometer](https://micrometer.io), metrics that are sent to the Micrometer global registry are auto-collected.
+If your application uses [Micrometer](https://micrometer.io), metrics that are sent to the Micrometer global registry are autocollected.
-Also, if your application uses [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html), metrics configured by Spring Boot Actuator are also auto-collected.
+Also, if your application uses [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html), metrics configured by Spring Boot Actuator are also autocollected.
To send custom metrics using micrometer:
To send custom metrics using micrometer:
counter.increment(); ```
-1. The metrics will be ingested into the
+1. The metrics are ingested into the
[customMetrics](/azure/azure-monitor/reference/tables/custommetrics) table, with tags captured in the `customDimensions` column. You can also view the metrics in the [metrics explorer](../essentials/metrics-getting-started.md) under the `Log-based metrics` metric namespace.
To send custom metrics using micrometer:
> [!NOTE] > Application Insights Java replaces all non-alphanumeric characters (except dashes) in the Micrometer metric name with underscores. As a result, the preceding `test.counter` metric will show up as `test_counter`.
-To disable auto-collection of Micrometer metrics and Spring Boot Actuator metrics:
+To disable autocollection of Micrometer metrics and Spring Boot Actuator metrics:
> [!NOTE] > Custom metrics are billed separately and might generate extra costs. Make sure to check the [Pricing information](https://azure.microsoft.com/pricing/details/monitor/). To disable the Micrometer and Spring Boot Actuator metrics, add the following configuration to your config file.
Starting from version 3.3.0, you can capture request and response headers on you
The header names are case insensitive.
-The preceding examples will be captured under the property names `http.request.header.my_header_a` and
+The preceding examples are captured under the property names `http.request.header.my_header_a` and
`http.response.header.my_header_b`. Similarly, you can capture request and response headers on your client (dependency) telemetry:
Similarly, you can capture request and response headers on your client (dependen
} ```
-Again, the header names are case insensitive. The preceding examples will be captured under the property names
+Again, the header names are case insensitive. The preceding examples are captured under the property names
`http.request.header.my_header_c` and `http.response.header.my_header_d`. ## HTTP server 4xx response codes
Starting from version 3.3.0, you can change this behavior to capture them as suc
} ```
-## Suppress specific auto-collected telemetry
+## Suppress specific autocollected telemetry
-Starting from version 3.0.3, specific auto-collected telemetry can be suppressed by using these configuration options:
+Starting from version 3.0.3, specific autocollected telemetry can be suppressed by using these configuration options:
```json {
The setting applies to the following metrics:
* **Default performance counters**: For example, CPU and memory * **Default custom metrics**: For example, garbage collection timing * **Configured JMX metrics**: [See the JMX metric section](#jmx-metrics)
-* **Micrometer metrics**: [See the Auto-collected Micrometer metrics section](#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics)
+* **Micrometer metrics**: [See the Autocollected Micrometer metrics section](#autocollected-micrometer-metrics-including-spring-boot-actuator-metrics)
## Heartbeat
For more information, see the [Authentication](./azure-ad-authentication.md) doc
## HTTP proxy
-If your application is behind a firewall and can't connect directly to Application Insights (see [IP addresses used by Application Insights](./ip-addresses.md)), you can configure Application Insights Java 3.x to use an HTTP proxy:
+If your application is behind a firewall and can't connect directly to Application Insights, refer to [IP addresses used by Application Insights](./ip-addresses.md).
+
+To work around this issue, you can configure Application Insights Java 3.x to use an HTTP proxy.
```json {
This example shows what a configuration file looks like with multiple components
} } }
-```
+```
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 04/21/2023 Last updated : 05/04/2023 ms.devlang: java
# Upgrading from Application Insights Java 2.x SDK
-There are typically no code changes when upgrading to 3.x. The 3.x SDK dependencies are just no-op API versions of the
-2.x SDK dependencies, but when used along with the 3.x Java agent, the 3.x Java agent provides the implementation
-for them, and your custom instrumentation will be correlated with all the new
-auto-instrumentation which is provided by the 3.x Java agent.
+There are typically no code changes when upgrading to 3.x. The 3.x SDK dependencies are no-op API versions of the 2.x SDK dependencies. However, when they're used along with the 3.x Java agent, the agent provides the implementation for them. As a result, your custom instrumentation is correlated with all the new autoinstrumentation provided by the 3.x Java agent.
## Step 1: Update dependencies
auto-instrumentation which is provided by the 3.x Java agent.
| `applicationinsights-core` | Update the version to `3.4.3` or later | | | `applicationinsights-web` | Update the version to `3.4.3` or later, and remove the Application Insights web filter your `web.xml` file. | | | `applicationinsights-web-auto` | Replace with `3.4.3` or later of `applicationinsights-web` | |
-| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is auto-instrumented in the 3.x Java agent. |
-| `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is auto-instrumented in the 3.x Java agent. |
-| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is auto-instrumented in the 3.x Java agent. |
+| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is autoinstrumented in the 3.x Java agent. |
+| `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is autoinstrumented in the 3.x Java agent. |
+| `applicationinsights-logging-logback` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is autoinstrumented in the 3.x Java agent. |
| `applicationinsights-spring-boot-starter` | Replace with `3.4.3` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. | ## Step 2: Add the 3.x Java agent
Add the 3.x Java agent to your JVM command-line args, for example
-javaagent:path/to/applicationinsights-agent-3.4.12.jar ```
-If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
+If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the preceding example.
> [!Note] > If you were using the spring-boot-starter and if you prefer, there is an alternative to using the Java agent. See [3.x Spring Boot](./java-spring-boot.md).
If you were using the Application Insights 2.x Java agent, just replace your exi
See [configuring the connection string](./java-standalone-config.md#connection-string).
-## Additional notes
+## Other notes
The rest of this document describes limitations and changes that you may encounter
-when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful.
+when upgrading from 2.x to 3.x, and some workarounds that you may find helpful.
## TelemetryInitializers
-2.x SDK TelemetryInitializers will not be run when using the 3.x agent.
+2.x SDK TelemetryInitializers don't run when using the 3.x agent.
Many of the use cases that previously required writing a `TelemetryInitializer` can be solved in Application Insights Java 3.x by configuring [custom dimensions](./java-standalone-config.md#custom-dimensions). or using [inherited attributes](./java-standalone-config.md#inherited-attribute-preview). ## TelemetryProcessors
-2.x SDK TelemetryProcessors will not be run when using the 3.x agent.
+2.x SDK TelemetryProcessors don't run when using the 3.x agent.
Many of the use cases that previously required writing a `TelemetryProcessor` can be solved in Application Insights Java 3.x by configuring [sampling overrides](./java-standalone-config.md#sampling-overrides-preview). ## Multiple applications in a single JVM
-This use case is supported in Application Insights Java 3.x using [Instrumentation key overrides (preview)](./java-standalone-config.md#instrumentation-key-overrides-preview).
-
+This use case is supported in Application Insights Java 3.x using
+[Cloud role name overrides (preview)](./java-standalone-config.md#cloud-role-name-overrides-preview) and/or
+[Connection string overrides (preview)](./java-standalone-config.md#connection-string-overrides-preview).
## Operation names
in the Application Insights Portal U/X, for example
:::image type="content" source="media/java-ipa/upgrade-from-2x/operation-names-parameterized.png" alt-text="Screenshot showing operation names parameterized":::
-However, for some applications, you may still prefer the aggregated view in the U/X
-that was provided by the previous operation names, in which case you can use the
-[telemetry processors](./java-standalone-telemetry-processors.md) (preview) feature in 3.x
-to replicate the previous behavior.
+However, for some applications, you may still prefer the aggregated view in the U/X that was provided by the previous operation names. In this case, you can use the [telemetry processors](./java-standalone-telemetry-processors.md) (preview) feature in 3.x to replicate the previous behavior.
-The snippet below configures 3 telemetry processors that combine to replicate the previous behavior.
+The following snippet configures three telemetry processors that combine to replicate the previous behavior.
The telemetry processors perform the following actions (in order): 1. The first telemetry processor is an attribute processor (has type `attribute`), which means it applies to all telemetry that has attributes (currently `requests` and `dependencies`, but soon also `traces`).
- It will match any telemetry that has attributes named `http.method` and `http.url`.
+ It matches any telemetry that has attributes named `http.method` and `http.url`.
- Then it will extract the path portion of the `http.url` attribute into a new attribute named `tempName`.
+ Then it extracts the path portion of the `http.url` attribute into a new attribute named `tempPath`.
2. The second telemetry processor is a span processor (has type `span`), which means it applies to `requests` and `dependencies`.
- It will match any span that has an attribute named `tempPath`.
+ It matches any span that has an attribute named `tempPath`.
- Then it will update the span name from the attribute `tempPath`.
+ Then it updates the span name from the attribute `tempPath`.
3. The last telemetry processor is an attribute processor, same type as the first telemetry processor.
- It will match any telemetry that has an attribute named `tempPath`.
+ It matches any telemetry that has an attribute named `tempPath`.
- Then it will delete the attribute named `tempPath`, so that it won't be reported as a custom dimension.
+ Then it deletes the attribute named `tempPath`, so that it isn't reported as a custom dimension.
``` {
The telemetry processors perform the following actions (in order):
] } }
-```
+```
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
See [this](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azu
**Footnotes** - <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of unhandled exceptions-- <a name="FOOTNOTETWO">2</a>: By default, logging is only collected when that logging is performed at the INFO level or higher. To change this level, see the [configuration options](./java-standalone-config.md#auto-collected-logging).
+- <a name="FOOTNOTETWO">2</a>: By default, logging is only collected when that logging is performed at the INFO level or higher. To change this level, see the [configuration options](./java-standalone-config.md#autocollected-logging).
- <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected when that logging is performed at the WARNING level or higher. To change this level, see the [configuration options](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#usage) and specify `logging_level`. ## Collect custom telemetry
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
To discover the resources that you can autoscale, follow these steps.
## Create your first autoscale setting
+> [!NOTE]
+> In addition to the Autoscale instructions in this article, Azure App Service now offers automatic scaling. For more on this capability, see the [automatic scaling](../../app-service/manage-automatic-scaling.md) article.
+>
+ Follow the steps below to create your first autoscale setting. 1. Open the **Autoscale** pane in Azure Monitor and select a resource that you want to scale. The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.](../../app-service/quickstart-dotnetcore.md)
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
For situations in which older or archived logs must be intensively queried with
Because [workspace-based Application Insights resources](../app/create-workspace-resource.md) store their data in a Log Analytics workspace, the billing for data ingestion and retention is done by the workspace where the Application Insights data is located. For this reason, you can use all options of the Log Analytics pricing model, including [commitment tiers](#commitment-tiers), along with pay-as-you-go.
+> [!TIP]
+> Looking to adjust retention settings on your Application Insights tables? The table names have changed for workspace-based components. See [Application Insights Table Structure](https://learn.microsoft.com/azure/azure-monitor/app/convert-classic-resource#table-structure).
+ Data ingestion and data retention for a [classic Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) follow the same pay-as-you-go pricing as workspace-based resources, but they can't use commitment tiers. Telemetry from ping tests and multi-step tests is charged the same as data usage for other telemetry from your app. Use of web tests and enabling alerting on custom metric dimensions is still reported through Application Insights. There's no data volume charge for using [Live Metrics Stream](../app/live-stream.md).
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Updated
Alerts|[Understanding Azure Active Directory Application Proxy Complex application scenario (preview)](../active-directory/app-proxy/application-proxy-configure-complex-application.md)| Updated the documentation for the common schema used in the alerts payload to contain the detailed information about the fields in the payload of each alert type. | Alerts|[Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md)|Updated list of metrics supported by metric alert rules.| Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Updated the documentation explaining the retry logic used in action groups that use webhooks.|
-Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Added list of countries supported by voice notifications.|
+Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Added list of countries/regions supported by voice notifications.|
Alerts|[Connect ServiceNow to Azure Monitor](alerts/itsmc-secure-webhook-connections-servicenow.md)|Added Tokyo to list of supported ServiceNow webhook integrations.| Application-Insights|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Release notes are now available for each SDK.| Application-Insights|[What is distributed tracing and telemetry correlation?](app/distributed-tracing-telemetry-correlation.md)|Merged our documents related to distributed tracing and telemetry correlation.|
azure-netapp-files Azacsnap Cmd Ref Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-restore.md
na Previously updated : 10/09/2022 Last updated : 05/04/2023
This article provides a guide for running the restore command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+> [!NOTE]
+> The restore command is only available for Azure Large Instance and Azure NetApp Files. Any restore of Azure managed disks must be done using the Azure portal or the Azure CLI.
+
## Introduction Doing a volume restore from a snapshot is done using the `azacsnap -c restore` command.
Doing a volume restore from a snapshot is done using the `azacsnap -c restore` c
The `-c restore` command has the following options: -- `--restore snaptovol` Creates a new volume based on a volume snapshot. This command creates a new "cloned" volume for each volume in the configuration file, by default using the latest volume snapshot as the base to create the new volume. For data volumes it's possible to select a snapshot to clone by using the option `--snapshotfilter <Snapshot Name>`, this will only complete if ALL data volumes have that same snapshot. This command does not interrupt the storage replication from primary to secondary. Instead clones of the snapshot are created at the same location and recommended filesystem mountpoints of the cloned volumes are presented. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system).
+- `--restore snaptovol` Creates a new volume based on a volume snapshot. This command creates a new "cloned" volume for each volume in the configuration file, by default using the latest volume snapshot as the base to create the new volume. For data volumes, it's possible to select a snapshot to clone by using the option `--snapshotfilter <Snapshot Name>`; this only completes if ALL data volumes have that same snapshot. This command doesn't interrupt the storage replication from primary to secondary. Instead, clones of the snapshot are created at the same location, and recommended filesystem mountpoints of the cloned volumes are presented. If used on an Azure Large Instance system, this command should be run **in the DR region** (that is, the target fail-over system).
-- `--restore revertvolume` Reverts the target volume to a prior state based on a volume snapshot. Using this command as part of DR Failover into the paired DR region. This command **stops** storage replication from the primary site to the secondary site, and reverts the target DR volume(s) to their latest available snapshot on the DR volumes along with recommended filesystem mountpoints for the reverted DR volumes. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system).
+- `--restore revertvolume` Reverts the target volume to a prior state based on a volume snapshot. Use this command as part of DR failover into the paired DR region. This command **stops** storage replication from the primary site to the secondary site, and reverts the target DR volume(s) to their latest available snapshot on the DR volumes, along with recommended filesystem mountpoints for the reverted DR volumes. If used on an Azure Large Instance system, this command should be run **in the DR region** (that is, the target fail-over system).
+
+ > [!WARNING]
+ > The revertvolume option is data destructive: any content stored in the volumes after the chosen snapshot was taken is lost and isn't recoverable.
- > [!NOTE]
- > The sub-command (`--restore revertvolume`) is only available for Azure Large Instance and is not available for Azure NetApp Files.
+ > [!TIP]
+ > After doing a revertvolume, it's recommended to remount the volume to ensure there are no stale file handles. This can be done using `mount -o remount <mount_point>`.
-- `--dbsid <SAP HANA SID>` is the SAP HANA SID being selected from the configuration file to apply the volume restore commands to.
+- `--dbsid <SAP HANA SID>` is the database SID as specified in the configuration file to apply the volume restore commands to.
- `[--configfile <config filename>]` is an optional parameter allowing for custom configuration file names.
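+
+For example, a clone-from-snapshot restore might be run as follows; the SID `H80` and the configuration filename are illustrative placeholders:
+
+```bash
+# Create cloned volumes from the latest snapshots for the database H80,
+# using a custom configuration file.
+azacsnap -c restore --restore snaptovol --dbsid H80 --configfile H80.json
+```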
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
na Previously updated : 03/02/2023 Last updated : 05/04/2023
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
+## May-2023
+
+### AzAcSnap 8 (Build: 1AC073A)
+
+AzAcSnap 8 is being released with the following fixes and improvements:
+
+- Fixes and Improvements:
+ - Restore (`-c restore`) changes:
+ - New ability to use `-c restore` to revertvolume for Azure NetApp Files.
+ - Backup (`-c backup`) changes:
+ - Fix for incorrect error output when using `-c backup` and the database has 'backint' configured.
+ - Remove lower-case conversion for anfBackup rename-only option using `-c backup` so the snapshot name maintains case of Volume name.
+ - Details (`-c details`) changes:
+ - Fix for listing snapshot details with `-c details` when using Azure Large Instance storage.
+ - Logging enhancements:
+ - Extra logging output to syslog (e.g., /var/log/messages) on failure.
+ - New "mainlog" (azacsnap.log) to provide a more parse-able high-level log of commands run with success or failure result.
+ - New global settings file (`.azacsnaprc`) to control behavior of azacsnap, including location of "mainlog" file.
+
+Download the [AzAcSnap 8](https://aka.ms/azacsnap-8) installer.
+ ## Feb-2023 ### AzAcSnap 7a (Build: 1AA8343)
AzAcSnap 7a is being released with the following fixes:
- Enable mounting volumes on HLI (BareMetal) where the volumes have been reverted to a prior state when using `-c restore --restore revertvolume`. - Correctly set ThroughputMiBps on volume clones for Azure NetApp Files volumes in an Auto QoS Capacity Pool when using `-c restore --restore snaptovol`.
+Download the [AzAcSnap 7a](https://aka.ms/azacsnap-7a) installer.
+ ## Dec-2022 ### AzAcSnap 7 (Build: 1A8FDFF)
AzAcSnap 7 is being released with the following fixes and improvements:
- Preliminary support for Azure NetApp Files Backup. - Db2 database support adding options to configure, test, and snapshot backup IBM Db2 in an application consistent manner.
+Download the [AzAcSnap 7](https://aka.ms/azacsnap-7) installer.
+ ## Jul-2022 ### AzAcSnap 6 (Build: 1A5F0B8)
AzAcSnap 6 is being released with the following fixes and improvements:
- ANF Client API Version updated to 2021-10-01. - Change to workflow for handling Backint to re-enable backint configuration should there be a failure when putting SAP HANA in a consistent state for snapshot.
+Download the [AzAcSnap 6](https://aka.ms/azacsnap-6) installer.
+ ## May-2022 ### AzAcSnap v5.0.3 (Build: 20220524.14204) - Patch update to v5.0.2
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-tips.md
na Previously updated : 08/04/2021 Last updated : 05/04/2023
This article provides tips and tricks that might be helpful when you use AzAcSnap.
+## Global settings to control azacsnap behavior
+
+AzAcSnap 8 introduced a new global settings file (`.azacsnaprc`), which must be located in the same (current working) directory that azacsnap is executed in. Starting the filename with a dot (`.`) hides it from standard directory listings. The file allows global settings that control the behavior of AzAcSnap to be set. The format is one entry per line, with a supported customizing variable and a new overriding value.
+
+Settings that can be controlled by adding/editing the global settings file are:
+
+- **MAINLOG_LOCATION**, which sets the location of the "mainlog" output file (`azacsnap.log`) introduced in AzAcSnap 8. Values should be absolute paths, for example:
+ - `MAINLOG_LOCATION=/home/azacsnap/bin/logs`
+
+## Mainlog parsing
+
+AzAcSnap 8 introduced a new "mainlog" to provide simpler parsing of runs of AzAcSnap. The inspiration for this file is the SAP HANA backup catalog, which shows when AzAcSnap was started, how long it took, and what the snapshot name is. With AzAcSnap, this idea has been taken further to include information for each of the AzAcSnap commands, specifically the `-c` options, and the file has the following headers:
+
+```output
+DATE_TIME,OPERATION_NAME,STATUS,SID,DATABASE_TYPE,DURATION,SNAPSHOT_NAME,AZACSNAP_VERSION,AZACSNAP_CONFIG_FILE,VOLUME
+```
+
+When AzAcSnap is run it appends to the log the appropriate information depending on the `-c` command used, examples of output are as follows:
+
+```output
+2023-03-29T16:10:57.8643546+13:00,about,started,,,,,8,azacsnap.json,
+2023-03-29T16:10:57.8782148+13:00,about,SUCCESS,,,0:00:00.0258013,,8,azacsnap.json,
+2023-03-29T16:11:55.7276719+13:00,backup,started,PR1,Hana,,pr1_hourly__F47B181A117,8,PR1.json,(data)HANADATA_P;(data)HANASHARED_P;(data)VGvol01;
+2023-03-29T16:13:03.3774633+13:00,backup,SUCCESS,PR1,Hana,0:01:07.7558663,pr1_hourly__F47B181A117,8,PR1.json,(data)HANADATA_P;(data)HANASHARED_P;(data)VGvol01;
+2023-03-29T16:13:30.1312963+13:00,details,started,PR1,Hana,,,8,PR1.json,(data)HANADATA_P;(data)HANASHARED_P;(data)VGvol01;(other)HANALOGBACKUP_P;
+2023-03-29T16:13:33.1806098+13:00,details,SUCCESS,PR1,Hana,0:00:03.1380686,,8,PR1.json,(data)HANADATA_P;(data)HANASHARED_P;(data)VGvol01;(other)HANALOGBACKUP_P;
+```
+
+This format makes the file parse-able with the Linux commands `watch`, `grep`, `head`, `tail`, and `column` to get continuous updates of AzAcSnap backups. An example combination of these commands in a single shell script to monitor AzAcSnap is as follows:
+
+```bash
+#!/bin/bash
+#
+# mainlog-watcher.sh
+# Monitor execution of AzAcSnap backup commands
+#
+# These values can be modified as appropriate.
+HEADER_VALUES_TO_EXCLUDE="AZACSNAP_VERSION,VOLUME,AZACSNAP_CONFIG_FILE"
+SCREEN_REFRESH_SECS=2
+#
+# Use AzAcSnap global settings file (.azacsnaprc) if available,
+# otherwise use the default location of the current working directory.
+AZACSNAP_RC=".azacsnaprc"
+if [ -f ${AZACSNAP_RC} ]; then
+ source ${AZACSNAP_RC} 2> /dev/null  # ignore errors while sourcing the settings file
+else
+ MAINLOG_LOCATION="."
+fi
+cd ${MAINLOG_LOCATION}
+echo "Changing current working directory to ${MAINLOG_LOCATION}"
+#
+# Default MAINLOG filename.
+MAINLOG_FILENAME="azacsnap.log"
+#
+# High-level explanation of how commands used.
+# `watch` - continuously monitoring the command output.
+# `column` - provide pretty output.
+# `grep` - filter only backup runs.
+# `head` and `tail` - add/remove column headers.
+watch -t -n ${SCREEN_REFRESH_SECS} \
+ "\
+ echo -n "Monitoring AzAcSnap @ "; \
+ date ; \
+ echo ; \
+ column -N"$(head -n1 ${MAINLOG_FILENAME})" \
+ -d -H "${HEADER_VALUES_TO_EXCLUDE}" \
+ -s"," -t ${MAINLOG_FILENAME} \
+ | head -n1 ; \
+ grep -e "DATE" -e "backup" ${MAINLOG_FILENAME} \
+ | column -N"$(head -n1 ${MAINLOG_FILENAME})" \
+ -d -H "${HEADER_VALUES_TO_EXCLUDE}" \
+ -s"," -t \
+ | tail -n +2 \
+ | tail -n 12 \
+ "
+```
+
+Produces the following output refreshed every two seconds.
+
+```output
+Monitoring AzAcSnap @Fri May 5 11:26:36 NZST 2023
+
+DATE_TIME OPERATION_NAME STATUS SID DATABASE_TYPE DURATION SNAPSHOT_NAME
+2023-05-05T00:00:03.5705791+12:00 backup started PR1 Hana daily_archive__F4F02562F6B
+2023-05-05T00:02:11.5495104+12:00 backup SUCCESS PR1 Hana 0:02:08.2778958 daily_archive__F4F02562F6B
+2023-05-05T03:00:02.8123179+12:00 backup started PR1 Hana pr1_hourly__F4F08C604CD
+2023-05-05T03:01:08.6609302+12:00 backup SUCCESS PR1 Hana 0:01:06.1536665 pr1_hourly__F4F08C604CD
+2023-05-05T06:00:02.8871149+12:00 backup started PR1 Hana pr1_hourly__F4F0F35FAB9
+2023-05-05T06:01:09.0608121+12:00 backup SUCCESS PR1 Hana 0:01:06.4537885 pr1_hourly__F4F0F35FAB9
+2023-05-05T09:00:03.1769836+12:00 backup started PR1 Hana pr1_hourly__F4F15A5F8E2
+2023-05-05T09:01:08.6898938+12:00 backup SUCCESS PR1 Hana 0:01:05.8221419 pr1_hourly__F4F15A5F8E2
+```
++ ## Limit service principal permissions It may be necessary to limit the scope of the AzAcSnap service principal. Review the [Azure RBAC documentation](../role-based-access-control/index.yml) for more details on fine-grained access management of Azure resources.
az role definition create --role-definition '{ \
}' ```
-For restore options to work successfully, the AzAcSnap service principal also needs to be able to create volumes. In this case the role definition needs an additional action, therefore the complete service principal should look like the following example.
+For restore options to work successfully, the AzAcSnap service principal also needs to be able to create volumes. In this case, the role definition needs an extra "Actions" clause added; therefore, the complete service principal should look like the following example.
```azurecli az role definition create --role-definition '{ \
azacsnap -c backup --volume data --prefix hana_TEST --retention=1
## Setup automatic snapshot backup
-It is common practice on Unix/Linux systems to use `cron` to automate running commands on a
+It's common practice on Unix/Linux systems to use `cron` to automate running commands on a
system. The standard practice for the snapshot tools is to set up the user's `crontab`.
-An example of a `crontab` for the user `azacsnap` to automate snapshots is below.
+An example of a `crontab` for the user `azacsnap` to automate snapshots follows.
```output MAILTO=""
MAILTO=""
Explanation of the above crontab. -- `MAILTO=""`: by having an empty value this prevents cron from automatically trying to email the user when executing the crontab entry as it would likely end up in the local user's mail file.
+- `MAILTO=""`: by having an empty value this prevents cron from automatically trying to email the local Linux user when executing the crontab entry.
- Shorthand versions of timing for crontab entries are self-explanatory: - `@monthly` = Run once a month, that is, "0 0 1 * *". - `@weekly` = Run once a week, that is, "0 0 * * 0". - `@daily` = Run once a day, that is, "0 0 * * *". - `@hourly` = Run once an hour, that is, "0 * * * *".-- The first five columns are used to designate times, refer to column examples below:
+- The first five columns are used to designate times; refer to the following column examples (a complete entry combining these fields is sketched after this list):
- `0,15,30,45`: Every 15 minutes - `0-23`: Every hour - `*` : Every day
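The following is a minimal, hypothetical crontab entry combining the fields above. The shell setup (`. ~/.profile`, the `~/bin` working directory) is an assumption; the backup options mirror the earlier `azacsnap -c backup` example, so adjust both to your installation.

```bash
# Hypothetical example: run an azacsnap backup at 0, 15, 30, and 45 minutes past every hour, every day.
0,15,30,45 * * * * (. ~/.profile ; cd ~/bin ; ./azacsnap -c backup --volume data --prefix hana_TEST --retention=1)
```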
generated successfully.
## Manage AzAcSnap log files
-AzAcSnap writes output of its operation to log files to assist with debugging and to validate correct operation. These log files will continue to grow unless actively managed. Fortunately UNIX based systems have a tool to manage and archive log files called logrotate.
+AzAcSnap writes the output of its operations to log files to assist with debugging and to validate correct operation. These log files continue to grow unless actively managed. Fortunately, UNIX-based systems have a tool called `logrotate` to manage and archive log files.
-This is an example configuration for logrotate. This configuration will keep a maximum of 31 logs (approximately one month), and when the log files are larger than 10k it will rotate and compress them.
+The following output provides an example configuration for logrotate. This configuration keeps a maximum of 31 logs (approximately one month); when a log file grows larger than 10k, logrotate rotates it by appending a number to the filename and then compresses it.
```output # azacsnap logrotate configuration file
compress
} ```
-After creating the logrotate.conf file, logrotate should be run on a regular basis to archive AzAcSnap log files accordingly. This can be done using cron. The following is the line of the azacsnap user's crontab which will run logrotate on a daily schedule using the configuration file described above.
+After creating the `logrotate.conf` file, run the `logrotate` command regularly to archive AzAcSnap log files. You can automate the `logrotate` command with cron. The following output is one line of the azacsnap user's crontab; this example runs logrotate daily using the configuration file `~/logrotate.conf`.
```output @daily /usr/sbin/logrotate -s ~/logrotate.state ~/logrotate.conf >> ~/logrotate.log
After creating the logrotate.conf file, logrotate should be run on a regular bas
> [!NOTE] > In the example above the logrotate.conf file is in the user's home (~) directory.
-After several days the azacsnap log files should look similar to the following directory listing.
+After several days, the azacsnap log files should look similar to the following directory listing.
```bash ls -ltra ~/bin/logs
ls -ltra ~/bin/logs
The following conditions should be monitored to ensure a healthy system:
-1. Available disk space. Snapshots will slowly consume disk space as keeping older disk blocks
- are retained in the snapshot.
- 1. To help automate disk space management, use the `--retention` and `--trim` options to automatically clean up the old snapshots and database log files.
+1. Available disk space. Snapshots slowly consume disk space based on the block-level change rate, because older disk blocks are retained in the snapshot.
+ 1. To help automate disk space management, use the `--retention` and `--trim` options to automatically clean up the old snapshots and database log files.
1. Successful execution of the snapshot tools 1. Check the `*.result` file for the success or failure of the latest running of `azacsnap`. 1. Check `/var/log/messages` for output from the `azacsnap` command. A quick spot-check sketch follows this list.
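A minimal spot-check sketch for these conditions follows; the `~/bin` location and the `/hana/data` mount point are assumptions based on earlier examples in this article.

```bash
# Check free space on the data volume (mount point is an assumption).
df -h /hana/data
# Check the result of the most recent azacsnap runs.
cat ~/bin/*.result
# Check recent azacsnap output in the system log.
grep azacsnap /var/log/messages | tail -n 20
```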
The following conditions should be monitored to ensure a healthy system:
## Delete a snapshot
-To delete a snapshot, use the command `azacsnap -c delete`. It's not possible to delete
-snapshots from the OS level. You must use the correct command (`azacsnap -c delete`) to delete the storage snapshots.
+To delete a snapshot, use the command `azacsnap -c delete`. It's not possible to delete snapshots from the OS level. You must use the correct command (`azacsnap -c delete`) to delete the storage snapshots.
> [!IMPORTANT]
-> Be vigilant when you delete a snapshot. Once deleted, it is **IMPOSSIBLE** to recover
-the deleted snapshots.
+> Be vigilant when you delete a snapshot. Once deleted, it is **IMPOSSIBLE** to recover the deleted snapshots.
## Restore a snapshot
copy is made (`cp /hana/data/H80/mnt00001/.snapshot/hana_hourly.2020-06-17T11304
For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request via the [Azure portal](https://portal.azure.com).
-If you decide to perform the disaster recovery failover, the `azacsnap -c restore --restore revertvolume` command at the DR site will automatically make available the most recent (`/hana/data` and `/hana/logbackups`) volume snapshots to allow for an SAP HANA recovery. Use this command with caution as it breaks replication between production and DR sites.
+If you decide to perform the disaster recovery failover, the `azacsnap -c restore --restore revertvolume` command at the DR site automatically makes available the most recent (`/hana/data` and `/hana/logbackups`) volume snapshots to allow for an SAP HANA recovery. Use this command with caution as it breaks replication between production and DR sites.
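A hedged example of the failover command follows. `H80` is the SID used elsewhere in this article, and the `--dbsid` parameter is an assumption; confirm the exact syntax for your azacsnap version before running, because this command breaks replication between the production and DR sites.

```bash
# Assumed syntax - verify against your azacsnap version's documentation before use.
azacsnap -c restore --restore revertvolume --dbsid H80
```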
## Set up snapshots for 'boot' volumes only > [!IMPORTANT] > This operation applies only to Azure Large Instance.
-In some cases, customers already have tools to protect SAP HANA and only want to configure 'boot' volume snapshots. In this case only the following steps need to completed.
+In some cases, customers already have tools to protect SAP HANA and only want to configure 'boot' volume snapshots. In this case, only the following steps need to be completed.
1. Complete steps 1-4 of the pre-requisites for installation. 1. Enable communication with storage. 1. Download and run the installer to install the snapshot tools. 1. Complete setup of snapshot tools.
-1. Get the list of volumes to be added to the azacsnap configuration file, in this example the Storage User Name is `cl25h50backup` and the Storage IP Address is `10.1.1.10`
+1. Get the list of volumes to be added to the azacsnap configuration file. In this example, the Storage User Name is `cl25h50backup` and the Storage IP Address is `10.1.1.10`.
```bash ssh cl25h50backup@10.1.1.10 "volume show -volume *boot*" ```
In some cases, customers already have tools to protect SAP HANA and only want to
A 'boot' snapshot can be recovered as follows:
-1. The customer will need to shut down the server.
+1. The customer needs to shut down the server.
1. After the Server is shut down, the customer will need to open a service request that contains the Machine ID and Snapshot to restore. > Customers can open a service request via the [Azure portal](https://portal.azure.com).
-1. Microsoft will restore the Operating System LUN using the specified Machine ID and Snapshot, and then boot the Server.
-1. The customer will then need to confirm Server is booted and healthy.
+1. Microsoft restores the Operating System LUN using the specified Machine ID and Snapshot, and then boots the Server.
+1. The customer then needs to confirm the Server is booted and healthy.
-No additional steps to be performed after the restore.
+No other steps need to be performed after the restore.
## Key facts to know about snapshots
Key attributes of storage volume snapshots:
> `.snapshot` is a read-only hidden *virtual* folder providing read-only access to the snapshots. - **Max snapshot:** The hardware can sustain up to 250 snapshots per volume. The snapshot
- command will keep a maximum number of snapshots for the prefix based on the retention
- set on the command line, and will delete the oldest snapshot if it goes beyond the
- maximum number to retain.
+ command keeps a maximum number of snapshots for the prefix based on the retention
+ set on the command line. Any snapshots with the same prefix that exceed the retention number are deleted, oldest first.
- **Snapshot name:** The snapshot name includes the prefix label provided by the customer. - **Size of the snapshot:** Depends upon the size/changes on the database level. - **Log file location:** Log files generated by the commands are output into folders as
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
For Azure Database for MySQL limits, see [Limitations in Azure Database for MySQ
For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for PostgreSQL](../../postgresql/concepts-limits.md).
+## Azure Deployment Environments limits
++ ## Azure Functions limits [!INCLUDE [functions-limits](../../../includes/functions-limits.md)]
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
Title: Move Azure Networking resources to new subscription or resource group
description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 10/28/2022 Last updated : 05/05/2023 # Move networking resources to new resource group or subscription This article describes how to move virtual networks and other networking resources to a new resource group or Azure subscription.
-During the move, your networking resources will operate without interruption.
+During the move, your networking resources operate without interruption.
If you want to move networking resources to a new region, see [Tutorial: Move Azure VMs across regions](../../../resource-mover/tutorial-move-region-virtual-machines.md). ## Dependent resources
-> [!NOTE]
-> Any resource, including a VPN Gateway, that is associated with a public IP Standard SKU address can't be moved across subscriptions. For virtual machines, you can [disassociate the public IP address](../../../virtual-network/ip-services/remove-public-ip-address-vm.md) before moving across subscriptions.
+When moving a resource, you must also move its dependent networking resources. However, any resource that is associated with a **Standard SKU** public IP address can't be moved across subscriptions. For example, you can't move a VPN Gateway that is associated with a **Standard SKU** public IP address to a new subscription.
-When moving a resource, you must also move its dependent resources (for example - public IP addresses, virtual network gateways, all associated connection resources). The virtual network assigned to the AKS instance can also be moved, and local network gateways can be in a different resource group.
+To move a virtual machine with a network interface card to a new subscription, you must move all dependent resources. Move the virtual network for the network interface card, all other network interface cards for the virtual network, and the VPN gateways. If a virtual machine is associated with a **Standard SKU** public IP address, [disassociate the public IP address](../../../virtual-network/ip-services/remove-public-ip-address-vm.md) before moving across subscriptions.
-> [!WARNING]
-> Please refrain from moving the virtual network for an AKS cluster. The AKS cluster will stop working if its virtual network is moved.
-
-To move a virtual machine with a network interface card to a new subscription, you must move all dependent resources. Move the virtual network for the network interface card, all other network interface cards for the virtual network, and the VPN gateways.
+If you move the virtual network for an AKS cluster, the AKS cluster stops working. The local network gateways can be in a different resource group.
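As a hedged illustration of moving a virtual machine together with its dependent networking resources, the following sketch uses `az resource move`; all resource names and IDs are placeholders.

```azurecli
# Illustrative only: move a VM, its NIC, and its virtual network to another resource group.
az resource move \
  --destination-group "targetRG" \
  --ids \
    "/subscriptions/<SUB_ID>/resourceGroups/sourceRG/providers/Microsoft.Compute/virtualMachines/myVM" \
    "/subscriptions/<SUB_ID>/resourceGroups/sourceRG/providers/Microsoft.Network/networkInterfaces/myVMNic" \
    "/subscriptions/<SUB_ID>/resourceGroups/sourceRG/providers/Microsoft.Network/virtualNetworks/myVNet"
# For a cross-subscription move, add: --destination-subscription-id "<TARGET_SUB_ID>"
```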
For more information, see [Scenario for move across subscriptions](../move-resource-group-and-subscription.md#scenario-for-move-across-subscriptions).
To move a peered virtual network, you must first disable the virtual network pee
## VPN Gateways
-You cannot move VPN Gateways across resource groups or subscriptions if they are of Basic SKU. Basic SKU is only meant for test environment usage and doesn't support resource move operation.
+You can't move VPN Gateways across resource groups or subscriptions if they are of Basic SKU. Basic SKU is only meant for test environment usage and doesn't support resource move operation.
## Subnet links
azure-resource-manager User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/user-defined-functions.md
Title: User-defined functions in templates
description: Describes how to define and use user-defined functions in an Azure Resource Manager template (ARM template). Previously updated : 04/12/2021 Last updated : 05/05/2023 # User-defined functions in ARM template
When defining a user function, there are some restrictions:
* The function can only use parameters that are defined in the function. When you use the [parameters](template-functions-deployment.md#parameters) function within a user-defined function, you're restricted to the parameters for that function. * The function can't call other user-defined functions. * The function can't use the [reference](template-functions-resource.md#reference) function or any of the [list](template-functions-resource.md#list) functions.
-* The function can't use the [dateTimeAdd](template-functions-date.md#datetimeadd) function.
* Parameters for the function can't have default values. ## Next steps
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Status code 409 will now be returned from [Re-Index Video](https://api-portal.vi
Azure Video Indexer now supports custom language models in Korean (`ko-KR`) in both the API and portal. * New languages supported for speech-to-text (STT)
- Azure Video Indexer APIs now support STT in Arabic Levantine (ar-SY), English UK dialect (en-GB), and English Australian dialect (en-AU).
+ Azure Video Indexer APIs now support STT in Arabic Levantine (ar-SY), English UK regional language (en-GB), and English Australian regional language (en-AU).
For video upload, we replaced zh-HANS to zh-CN, both are supported but zh-CN is recommended and more accurate.
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
Title: Tutorial - Build a serverless real-time chat app with client authentication
-description: A tutorial to walk through how to using Azure Web PubSub service and Azure Functions to build a serverless chat app with client authentication.
--
+description: A tutorial to walk through how to use Azure Web PubSub service and Azure Functions to build a serverless chat app with client authentication.
++ Previously updated : 11/08/2021 Last updated : 05/05/2023 # Tutorial: Create a serverless real-time chat app with Azure Functions and Azure Web PubSub service
In this tutorial, you learn how to:
* [Node.js](https://nodejs.org/en/download/), version 10.x. > [!NOTE] > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
* The [Azure CLI](/cli/azure) to manage Azure resources.
-* (Optional)[ngrok](https://ngrok.com/download) to expose local function as event handler for Web PubSub service. This is optional only for running the function app locally.
-
-# [C#](#tab/csharp)
+# [C# in-process](#tab/csharp-in-process)
* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
* The [Azure CLI](/cli/azure) to manage Azure resources.
-* (Optional)[ngrok](https://ngrok.com/download) to expose local function as event handler for Web PubSub service. This is optional only for running the function app locally.
+# [C# isolated process](#tab/csharp-isolated-process)
+
+* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+
+* The [Azure CLI](/cli/azure) to manage Azure resources.
In this tutorial, you learn how to:
func init --worker-runtime javascript ```
- # [C#](#tab/csharp)
+ # [C# in-process](#tab/csharp-in-process)
```bash func init --worker-runtime dotnet ```
+ # [C# isolated process](#tab/csharp-isolated-process)
+ ```bash
+ func init --worker-runtime dotnet-isolated
+ ```
+ 2. Install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`. # [JavaScript](#tab/javascript)
In this tutorial, you learn how to:
} ```
- # [C#](#tab/csharp)
+ # [C# in-process](#tab/csharp-in-process)
```bash dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub ```
+ # [C# isolated process](#tab/csharp-isolated-process)
+ ```bash
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
+ ```
+ 3. Create an `index` function to read and host a static web page for clients. ```bash func new -n index -t HttpTrigger
In this tutorial, you learn how to:
} ```
- # [C#](#tab/csharp)
+ # [C# in-process](#tab/csharp-in-process)
- Update `index.cs` and replace `Run` function with following codes. ```c# [FunctionName("index")]
In this tutorial, you learn how to:
}; } ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `index.cs` and replace `Run` function with following codes.
+ ```c#
+ [Function("index")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
+ {
+ var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
+ _logger.LogInformation($"index.html path: {path}.");
+
+ var response = req.CreateResponse();
+ response.WriteString(File.ReadAllText(path));
+ response.Headers.Add("Content-Type", "text/html");
+ return response;
+ }
+ ```
4. Create a `negotiate` function to help clients get service connection url with access token. ```bash
In this tutorial, you learn how to:
> In this sample, we use the [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. This won't work in a local function. You can leave it empty or use another way to get or generate `userId` when running locally. For example, let the client type a user name and pass it in the query string, like `?user={$username}`, when calling the `negotiate` function to get the service connection URL. Then, in the `negotiate` function, set `userId` to the value of `{query.user}`. # [JavaScript](#tab/javascript)
- - Update `negotiate/function.json` and copy following json codes.
+ - Update `negotiate/function.json` and copy following json codes.
```json { "bindings": [
In this tutorial, you learn how to:
] } ```
- - Update `negotiate/index.js` and copy following codes.
+ - Update `negotiate/index.js` and copy following codes.
```js module.exports = function (context, req, connection) { context.res = { body: connection }; context.done(); }; ```
- # [C#](#tab/csharp)
- - Update `negotiate.cs` and replace `Run` function with following codes.
+
+ # [C# in-process](#tab/csharp-in-process)
+ - Update `negotiate.cs` and replace `Run` function with following codes.
```c# [FunctionName("negotiate")] public static WebPubSubConnection Run(
In this tutorial, you learn how to:
return connection; } ```
- - Add below `using` statements in header to resolve required dependencies.
+ - Add `using` statements in header to resolve required dependencies.
```c# using Microsoft.Azure.WebJobs.Extensions.WebPubSub; ```
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `negotiate.cs` and replace `Run` function with following codes.
+ ```c#
+ [Function("negotiate")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
+ {
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.WriteAsJsonAsync(connectionInfo);
+ return response;
+ }
+ ```
+ 5. Create a `message` function to broadcast client messages through service. ```bash func new -n message -t HttpTrigger
In this tutorial, you learn how to:
}; ```
- # [C#](#tab/csharp)
- - Update `message.cs` and replace `Run` function with following codes.
+ # [C# in-process](#tab/csharp-in-process)
+ - Update `message.cs` and replace `Run` function with following codes.
```c# [FunctionName("message")] public static async Task<UserEventResponse> Run(
In this tutorial, you learn how to:
}; } ```
- - Add below `using` statements in header to resolve required dependencies.
+ - Add `using` statements in header to resolve required dependencies.
```c# using Microsoft.Azure.WebJobs.Extensions.WebPubSub; using Microsoft.Azure.WebPubSub.Common; ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `message.cs` and replace `Run` function with following codes.
+ ```c#
+ [Function("message")]
+ [WebPubSubOutput(Hub = "simplechat")]
+ public SendToAllAction Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
+ {
+ return new SendToAllAction
+ {
+ Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
+ DataType = request.DataType
+ };
+ }
+ ```
-6. Add the client single page `index.html` in the project root folder and copy content as below.
+6. Add the client single page `index.html` in the project root folder and copy in the following content.
```html <html> <body>
In this tutorial, you learn how to:
# [JavaScript](#tab/javascript)
- # [C#](#tab/csharp)
- Since C# project will compile files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+ # [C# in-process](#tab/csharp-in-process)
+ Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+ ```xml
+ <ItemGroup>
+ <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
```xml <ItemGroup> <None Update="index.html"> <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> </None> </ItemGroup>
- ``
+ ```
## Create and Deploy the Azure Function App
-Before you can deploy your function code to Azure, you need to create 3 resources:
+Before you can deploy your function code to Azure, you need to create three resources:
* A resource group, which is a logical container for related resources. * A storage account, which is used to maintain state and other information about your functions. * A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
Use the following commands to create these items.
az login ```
-1. Create a resource group or you can skip by re-using the one of Azure Web PubSub service:
+1. Create a resource group, or skip this step by reusing the resource group of your Azure Web PubSub service:
```azurecli az group create -n WebPubSubFunction -l <REGION>
Use the following commands to create these items.
# [JavaScript](#tab/javascript) ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
``` > [!NOTE]
- > If you're running the function version other than v3.0, please check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+ > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```azurecli
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- # [C#](#tab/csharp)
+ # [C# isolated process](#tab/csharp-isolated-process)
```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
``` 1. Deploy the function project to Azure:
- After you've successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+ After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
```bash func azure functionapp publish <FUNCTIONAPP_NAME>
Go to **Azure portal** -> Find your Function App resource -> **App keys** -> **S
:::image type="content" source="media/quickstart-serverless/func-keys.png" alt-text="Screenshot of get function system keys.":::
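If you prefer the CLI over the portal, the following sketch retrieves the function app's system keys; the system key name for the Web PubSub extension is an assumption here, so check the `az functionapp keys list` output for the exact name.

```azurecli
# List the function app keys; the "webpubsub_extension" system key name is assumed.
az functionapp keys list \
  --resource-group WebPubSubFunction \
  --name <FUNCTIONAPP_NAME> \
  --query "systemKeys.webpubsub_extension" --output tsv
```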
-Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub settings mapping to the one function in use as below. Replace the `<FUNCTIONAPP_NAME>` and `<APP_KEY>` to yours.
+Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub setting that maps to the function in use. Replace `<FUNCTIONAPP_NAME>` and `<APP_KEY>` with your values.
- Hub Name: `simplechat` - URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find
:::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler.":::
-> [!NOTE]
-> If you're running the functions in local. You can expose the function url with [ngrok](https://ngrok.com/download) by command `ngrok http 7071` after function start. And configure the Web PubSub service `Event Handler` with url: `https://<NGROK_ID>.ngrok.io/runtime/webhooks/webpubsub`.
- ## Configure to enable client authentication
-Go to **Azure portal** -> Find your Function App resource -> **Authentication**. Click **`Add identity provider`**. Set **App Service authentication settings** to **Allow unauthenticated access**, so you client index page can be visited by anonymous users before redirect to authenticate. Then **Save**.
+Go to **Azure portal** -> Find your Function App resource -> **Authentication**. Click **`Add identity provider`**. Set **App Service authentication settings** to **Allow unauthenticated access**, so your client index page can be visited by anonymous users before they're redirected to authenticate. Then **Save**.
-Here we choose `Microsoft` as identify provider which will use `x-ms-client-principal-name` as `userId` in the `negotiate` function. Besides, You can configure other identity providers following below links, and don't forget update the `userId` value in `negotiate` function accordingly.
+Here we choose `Microsoft` as the identity provider, which uses `x-ms-client-principal-name` as `userId` in the `negotiate` function. You can also configure other identity providers by following the links, and don't forget to update the `userId` value in the `negotiate` function accordingly.
* [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md) * [Facebook](../app-service/configure-authentication-provider-facebook.md)
Here we choose `Microsoft` as identify provider which will use `x-ms-client-prin
## Try the application
-Now you're able to test your page from your function app: `https://<FUNCTIONAPP_NAME>.azurewebsites.net/api/index`. See snapshot below.
+Now you're able to test your page from your function app: `https://<FUNCTIONAPP_NAME>.azurewebsites.net/api/index`. See the following snapshot.
1. Click `login` to auth yourself. 2. Type message in the input box to chat.
-In the message function, we will broadcast caller's message to all clients and return caller with message `[SYSTEM] ack`. So we can know in sample chat snapshot below, first 4 messages are from current client and last 2 messages are from another client.
+In the message function, we broadcast the caller's message to all clients and return the message `[SYSTEM] ack` to the caller. In the following sample chat snapshot, the first four messages are from the current client and the last two are from another client.
:::image type="content" source="media/quickstart-serverless/chat-sample.png" alt-text="Screenshot of chat sample.":::
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
Previously updated : 11/01/2021 Last updated : 05/05/2023 # Tutorial: Create a serverless notification app with Azure Functions and Azure Web PubSub service
In this tutorial, you learn how to:
> [!NOTE] > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (V4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
* The [Azure CLI](/cli/azure) to manage Azure resources.
-# [C#](#tab/csharp)
+# [C# in-process](#tab/csharp-in-process)
* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+
+* The [Azure CLI](/cli/azure) to manage Azure resources.
+
+# [C# isolated process](#tab/csharp-isolated-process)
+
+* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
* The [Azure CLI](/cli/azure) to manage Azure resources.
In this tutorial, you learn how to:
* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Python](https://www.python.org/downloads/) (v3.6 ~ v3.9). See [supported Python versions](../azure-functions/functions-reference-python.md#python-version).
+* [Python](https://www.python.org/downloads/) (v3.7+). See [supported Python versions](../azure-functions/functions-reference-python.md#python-version).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (V4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
* The [Azure CLI](/cli/azure) to manage Azure resources.
In this tutorial, you learn how to:
func init --worker-runtime javascript ```
- # [C#](#tab/csharp)
+ # [C# in-process](#tab/csharp-in-process)
```bash func init --worker-runtime dotnet ```
+ # [C# isolated process](#tab/csharp-isolated-process)
+ ```bash
+ func init --worker-runtime dotnet-isolated
+ ```
+ # [Python](#tab/python) ```bash func init --worker-runtime python
In this tutorial, you learn how to:
} ```
- # [C#](#tab/csharp)
+ # [C# in-process](#tab/csharp-in-process)
```bash dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub ```
+ # [C# isolated process](#tab/csharp-isolated-process)
+ ```bash
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
+ ```
++ # [Python](#tab/python) Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support. ```json
In this tutorial, you learn how to:
```bash func new -n index -t HttpTrigger ```
- # [JavaScript](#tab/javascript)
- - Update `index/function.json` and copy following json codes.
+ # [JavaScript](#tab/javascript)
+ - Update `index/function.json` and copy following json codes.
```json { "bindings": [
In this tutorial, you learn how to:
] } ```
- - Update `index/index.js` and copy following codes.
+ - Update `index/index.js` and copy following codes.
```js var fs = require('fs'); var path = require('path');
In this tutorial, you learn how to:
}); } ```-
- # [C#](#tab/csharp)
- - Update `index.cs` and replace `Run` function with following codes.
+
+ # [C# in-process](#tab/csharp-in-process)
+ - Update `index.cs` and replace `Run` function with following codes.
```c# [FunctionName("index")] public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
In this tutorial, you learn how to:
}; } ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `index.cs` and replace `Run` function with following codes.
+ ```c#
+ [Function("index")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
+ {
+ var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
+ _logger.LogInformation($"index.html path: {path}.");
- # [Python](#tab/python)
- - Update `index/function.json` and copy following json codes.
+ var response = req.CreateResponse();
+ response.WriteString(File.ReadAllText(path));
+ response.Headers.Add("Content-Type", "text/html");
+ return response;
+ }
+ ```
+
+ # [Python](#tab/python)
+ - Update `index/function.json` and copy following json codes.
```json { "scriptFile": "__init__.py",
In this tutorial, you learn how to:
] } ```
- - Update `index/__init__.py` and copy following codes.
+ - Update `index/__init__.py` and copy following codes.
```py import os
In this tutorial, you learn how to:
context.done(); }; ```
- # [C#](#tab/csharp)
- - Update `negotiate.cs` and replace `Run` function with following codes.
+
+ # [C# in-process](#tab/csharp-in-process)
+ - Update `negotiate.cs` and replace `Run` function with following codes.
```c# [FunctionName("negotiate")] public static WebPubSubConnection Run(
In this tutorial, you learn how to:
return connection; } ```
- - Add below `using` statements in header to resolve required dependencies.
+ - Add `using` statements in header to resolve required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `negotiate.cs` and replace `Run` function with following codes.
```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- ```
- # [Python](#tab/python)
- - Update `negotiate/function.json` and copy following json codes.
- ```json
+ [Function("negotiate")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ [WebPubSubConnectionInput(Hub = "notification")] WebPubSubConnection connectionInfo)
{
- "scriptFile": "__init__.py",
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- },
- {
- "type": "webPubSubConnection",
- "name": "connection",
- "hub": "notification",
- "direction": "in"
- }
- ]
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.WriteAsJsonAsync(connectionInfo);
+ return response;
} ```
- - Update `negotiate/__init__.py` and copy following codes.
- ```py
- import logging
-
- import azure.functions as func
-
- def main(req: func.HttpRequest, connection) -> func.HttpResponse:
- return func.HttpResponse(connection)
- ```
+ # [Python](#tab/python)
+ - Update `negotiate/function.json` and copy following json codes.
+ ```json
+ {
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "hub": "notification",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+ - Update `negotiate/__init__.py` and copy following codes.
+ ```py
+ import logging
+
+ import azure.functions as func
+
+
+ def main(req: func.HttpRequest, connection) -> func.HttpResponse:
+ return func.HttpResponse(connection)
+ ```
5. Create a `notification` function to generate notifications with `TimerTrigger`. ```bash
In this tutorial, you learn how to:
return (baseNum + 2 * floatNum * (Math.random() - 0.5)).toFixed(3); } ```
- # [C#](#tab/csharp)
+ # [C# in-process](#tab/csharp-in-process)
- Update `notification.cs` and replace `Run` function with following codes. ```c# [FunctionName("notification")]
In this tutorial, you learn how to:
return value.ToString("0.000"); } ```
- - Add below `using` statements in header to resolve required dependencies.
+ - Add `using` statements in header to resolve required dependencies.
```c# using Microsoft.Azure.WebJobs.Extensions.WebPubSub; using Microsoft.Azure.WebPubSub.Common; ```+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `notification.cs` and replace `Run` function with following codes.
+ ```c#
+ [Function("notification")]
+ [WebPubSubOutput(Hub = "notification")]
+ public SendToAllAction Run([TimerTrigger("*/10 * * * * *")] MyInfo myTimer)
+ {
+ return new SendToAllAction
+ {
+ Data = BinaryData.FromString($"[DateTime: {DateTime.Now}] Temperature: {GetValue(23, 1)}{'\xB0'}C, Humidity: {GetValue(40, 2)}%"),
+ DataType = WebPubSubDataType.Text
+ };
+ }
+
+ private static string GetValue(double baseNum, double floatNum)
+ {
+ var rng = new Random();
+ var value = baseNum + floatNum * 2 * (rng.NextDouble() - 0.5);
+ return value.ToString("0.000");
+ }
+ ```
+
# [Python](#tab/python) - Update `notification/function.json` and copy following json codes. ```json
In this tutorial, you learn how to:
})) ```
-6. Add the client single page `index.html` in the project root folder and copy content as below.
+6. Add the client single page `index.html` in the project root folder and copy in the following content.
```html <html> <body>
In this tutorial, you learn how to:
# [JavaScript](#tab/javascript)
- # [C#](#tab/csharp)
- Since C# project will compile files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+ # [C# in-process](#tab/csharp-in-process)
+ Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+ ```xml
+ <ItemGroup>
+ <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
```xml <ItemGroup> <None Update="index.html">
In this tutorial, you learn how to:
:::image type="content" source="media/quickstart-serverless/copy-connection-string.png" alt-text="Screenshot of copying the Web PubSub connection string.":::
- Run command below in the function folder to set the service connection string. Replace `<connection-string>` with your value as needed.
+ Run the following command in the function folder to set the service connection string. Replace `<connection-string>` with your value as needed.
```bash func settings add WebPubSubConnectionString "<connection-string>"
In this tutorial, you learn how to:
> [!NOTE] > The `TimerTrigger` used in the sample has a dependency on Azure Storage, but you can use the local storage emulator when the Function is running locally. If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
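As a minimal sketch for local runs, you can point `AzureWebJobsStorage` at the local emulator with the same `func settings add` command used earlier; this assumes the Storage Emulator (or Azurite) is already running.

```bash
func settings add AzureWebJobsStorage "UseDevelopmentStorage=true"
```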
- Now you're able to run your local function by command below.
+ Now you're able to run your local function with the following command.
```bash
- func start
+ func start --port 7071
``` Checking the running logs, you can visit your localhost static page at `http://localhost:7071/api/index`. > [!NOTE]
- > Some browers will automatically redirect to `https` that leads to wrong url. Suggest to use `Edge` and double check the url if rendering is not success.
+ > Some browsers automatically redirect to `https`, which leads to a wrong URL. We suggest using `Edge` and double-checking the URL if rendering isn't successful.
## Deploy Function App to Azure
-Before you can deploy your function code to Azure, you need to create 3 resources:
+Before you can deploy your function code to Azure, you need to create three resources:
* A resource group, which is a logical container for related resources. * A storage account, which is used to maintain state and other information about your functions. * A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
-Use the following commands to create these item.
+Use the following commands to create these items.
1. If you haven't done so already, sign in to Azure:
Use the following commands to create these item.
az login ```
-1. Create a resource group or you can skip by re-using the one of Azure Web PubSub service:
+1. Create a resource group, or skip this step by reusing the resource group of your Azure Web PubSub service:
```azurecli az group create -n WebPubSubFunction -l <REGION>
Use the following commands to create these item.
# [JavaScript](#tab/javascript) ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
``` > [!NOTE]
- > If you're running the function version other than v3.0, please check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+ > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+
+ # [C# in-process](#tab/csharp-in-process)
- # [C#](#tab/csharp)
+ ```azurecli
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
+ # [C# isolated process](#tab/csharp-isolated-process)
```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
``` # [Python](#tab/python) ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime python --runtime-version 3.9 --functions-version 3 --name <FUNCIONAPP_NAME> --os-type linux --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime python --runtime-version 3.9 --functions-version 4 --name <FUNCTIONAPP_NAME> --os-type linux --storage-account <STORAGE_NAME>
``` 1. Deploy the function project to Azure:
- After you've successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](../azure-functions/functions-run-local.md) command.
+ After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](../azure-functions/functions-run-local.md) command.
```bash func azure functionapp publish <FUNCTIONAPP_NAME> --publish-local-settings
backup Backup Support Matrix Mabs Dpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mabs-dpm.md
Azure Backup can back up DPM/MABS instances that are running any of the followin
**Issue** | **Details** | **Installation** | Install DPM/MABS on a single-purpose machine.<br/><br/> Don't install DPM/MABS on a domain controller, on a machine with the Application Server role installation, on a machine that's running Microsoft Exchange Server or System Center Operations Manager, or on a cluster node.<br/><br/> [Review all DPM system requirements](/system-center/dpm/prepare-environment-for-dpm#dpm-server).
-**Domain** | DPM/MABS should be joined to a domain. Install first, and then join DPM/MABS to a domain. Moving DPM/MABS to a new domain after deployment isn't supported.
+**Domain** | The server on which DPM/MABS will be installed should be joined to a domain before the installation begins. Moving DPM/MABS to a new domain after deployment isn't supported.
**Storage** | Modern backup storage (MBS) is supported from DPM 2016/MABS v2 and later. It isn't available for MABS v1. **MABS upgrade** | You can directly install MABS v4, or upgrade to MABS v4 from MABS v3 UR1 and UR2. [Learn more](backup-azure-microsoft-azure-backup.md#upgrade-mabs). **Moving MABS** | Moving MABS to a new server while retaining the storage is supported if you're using MBS.<br/><br/> The server must have the same name as the original. You can't change the name if you want to keep the same storage pool, and use the same MABS database to store data recovery points.<br/><br/> You'll need a backup of the MABS database because you'll need to restore it.
backup Compliance Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/compliance-offerings.md
# Azure Backup compliance offerings
-Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations that help organizations to comply with national, regional, and industry-specific requirements governing the collection and use of individuals' data.
+Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations that help organizations to comply with national/regional and industry-specific requirements governing the collection and use of individuals' data.
In this article, you'll learn about the various compliance offerings for Azure Backup to ensure that the service is regulated when you use the Azure Backup service.
backup Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-overview.md
Azure Backup service uses the Microsoft Azure Recovery Services (MARS) agent to
## Compliance with standardized security requirements
-To help organizations comply with national, regional, and industry-specific requirements governing the collection and use of individuals' data, Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations. [See the list of compliance certifications](compliance-offerings.md)
+To help organizations comply with national/regional and industry-specific requirements governing the collection and use of individuals' data, Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations. [See the list of compliance certifications](compliance-offerings.md)
## Next steps
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
This article lists current metros containing point-of-presence (POP) locations,
| Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa | | Middle East | Muscat, Oman<br />Fujirah, United Arab Emirates | Qatar<br />United Arab Emirates | | India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |
-| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macao<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Türkiye<br />Vietnam |
+| Asia | Hong Kong SAR<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong SAR<br />Indonesia<br />Israel<br />Japan<br />Macao<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Türkiye<br />Vietnam |
| Australia and New Zealand | Melbourne, Australia<br />Sydney, Australia<br />Auckland, New Zealand | Australia<br />New Zealand | ## Next steps
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Currently, the Windows agent doesn't reduce memory pressure when other applicati
} ```
-## Stop Windows service
+## Stop service
| Property | Value | |-|-| | Capability Name | StopService-1.0 | | Target type | Microsoft-Agent |
-| Supported OS Types | Windows |
-| Description | Uses the Windows Service Controller APIs to stop a Windows service during the fault, restarting it at the end of the duration or if the experiment is canceled. |
+| Supported OS Types | Windows, Linux |
+| Description | Stops a Windows service or a Linux systemd service during the fault, restarting it at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:agent:stopService/1.0 | | Parameters (key, value) | |
-| serviceName | The name of the Windows service you want to stop. You can run `sc.exe query` in command prompt to explore service names, Windows service friendly names aren't supported. |
+| serviceName | The name of the Windows service or Linux systemd service you want to stop. |
| virtualMachineScaleSetInstances | An array of instance IDs when applying this fault to a Virtual Machine Scale Set. Required for Virtual Machine Scale Sets. | ### Sample JSON
Currently, the Windows agent doesn't reduce memory pressure when other applicati
} ```
+### Limitations
+* Windows: service friendly names aren't supported. Use `sc.exe query` in the command prompt to explore service names.
+* Linux: other service types besides systemd, like sysvinit, aren't supported. See the following example for listing systemd service names.
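A minimal sketch for finding valid `serviceName` values on a Linux target (systemd only):

```bash
# List running systemd services to find the service name to use as serviceName.
systemctl list-units --type=service --state=running
```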
+ ## Time change | Property | Value |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
|-|-| | Capability Name | NetworkLatency-1.0 | | Target type | Microsoft-Agent |
-| Supported OS Types | Windows |
+| Supported OS Types | Windows, Linux |
| Description | Increases network latency for a specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
+| Prerequisites | (Windows) Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkLatency/1.0 | | Parameters (key, value) | | | latencyInMilliseconds | Amount of latency to be applied in milliseconds. |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
|-|-| | Capability Name | NetworkDisconnect-1.0 | | Target type | Microsoft-Agent |
-| Supported OS Types | Windows |
+| Supported OS Types | Windows, Linux |
| Description | Blocks outbound network traffic for specified port range and network block. |
-| Prerequisites | Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
+| Prerequisites | (Windows) Agent must be run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkDisconnect/1.0 | | Parameters (key, value) | | | destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
Configuring the shutdown fault:
} ```
+### Limitations
+Currently, only Virtual Machine Scale Sets configured with the **Uniform** orchestration mode are supported. If your Virtual Machine Scale Set uses **Flexible** orchestration, you can use the ARM virtual machine shutdown fault to shut down selected instances.
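A hedged way to confirm the orchestration mode of a scale set before targeting it with this fault; the resource names are placeholders.

```azurecli
az vmss show --resource-group <RESOURCE_GROUP> --name <VMSS_NAME> --query orchestrationMode --output tsv
```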
+ ## Azure Cosmos DB failover | Property | Value |
Configuring the shutdown fault:
| Capability Name | Reboot-1.0 | | Target type | Microsoft-AzureClusteredCacheForRedis | | Description | Causes a forced reboot operation to occur on the target to simulate a brief outage. |
-| Prerequisites | The target Azure Cache for Redis resource must be a Redis Cluster, which requires that the cache must be a Premium Tier cache. Standard and Basic Tiers aren't supported. |
+| Prerequisites | N/A |
| Urn | urn:csci:microsoft:azureClusteredCacheForRedis:reboot/1.0 | | Fault type | Discrete | | Parameters (key, value) | | | rebootType | The node types where the reboot action is to be performed which can be specified as PrimaryNode, SecondaryNode or AllNodes. |
-| shardId | The ID of the shard to be rebooted. |
+| shardId | The ID of the shard to be rebooted. Only relevant for Premium Tier caches. |
### Sample JSON
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Currently, you can only enable certain resource types for Chaos Studio VNet inje
## Enabling VNet injection To use Chaos Studio with VNet injection, you need to meet the following requirements. 1. The `Microsoft.ContainerInstance` and `Microsoft.Relay` resource providers must be registered with your subscription.
-1. The VNet where Chaos Studio resources will be injected needs to have two subnets, named `ChaosStudioContainerSubnet` and `ChaosStudioRelaySubnet`. Other subnet names can't be used.
+1. The VNet where Chaos Studio resources will be injected must have two subnets: a container subnet, which is used for the Chaos Studio containers that will be injected into your private network, and a relay subnet, which is used to forward communication from Chaos Studio to the containers inside the private network.
1. Both subnets need at least `/28` in address space. For example, an address prefix of `10.0.0.0/28` or `10.0.0.0/24`.
- 1. `ChaosStudioContainerSubnet` must be delegated to `Microsoft.ContainerInstance/containerGroups`.
-1. When enabling the desired resource as a target so you can use it in Chaos Studio experiments, the following properties must be set:
- 1. Set `properties.subnets.containerSubnetId` to the ID for `ChaosStudioContainerSubnet`.
- 1. Set `properties.subnets.relaySubnetId` to the ID for `ChaosStudioRelaySubnet`.
+ 1. The container subnet must be delegated to `Microsoft.ContainerInstance/containerGroups`.
+ 1. The subnets can be arbitrarily named, but we recommend `ChaosStudioContainerSubnet` and `ChaosStudioRelaySubnet`.
+1. When enabling the desired resource as a target so you can use it in Chaos Studio experiments, the following properties must be set:
+ 1. Set `properties.subnets.containerSubnetId` to the ID for the container subnet.
+ 1. Set `properties.subnets.relaySubnetId` to the ID for the relay subnet.
++
+If you're using the Azure portal to enable a private resource as a Chaos Studio target, Chaos Studio currently only recognizes subnets named `ChaosStudioContainerSubnet` and `ChaosStudioRelaySubnet`. If these subnets don't exist, the portal workflow can create them automatically.
+
+If you're using the CLI, the container and relay subnets can have any name (subject to the resource naming guidelines). You just need to specify the appropriate IDs when enabling the resource as a target.
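+
+As a minimal sketch (the target type `Microsoft-AzureKubernetesServiceChaosMesh`, the AKS cluster resource path, and the VNet and subnet names shown here are placeholder assumptions; substitute your own values), enabling a private resource as a target from the CLI while supplying the subnet IDs could look like the following command.
+
+```azurecli
+az rest --method put --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{clusterName}/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh?api-version={apiVersion}" --resource "https://management.azure.com" --body '{ "properties": { "subnets": { "containerSubnetId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{vnetName}/subnets/ChaosStudioContainerSubnet", "relaySubnetId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{vnetName}/subnets/ChaosStudioRelaySubnet" } } }'
+```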
## Example: Use Chaos Studio with a private AKS cluster
Now your private AKS cluster can be used with Chaos Studio! Use the following in
1. Create two subnets in the VNet you want to inject Chaos Studio resources into (in this case, the private AKS cluster's VNet):
- - `ChaosStudioContainerSubnet`
+ - Container subnet (example name: `ChaosStudioContainerSubnet`)
- Delegate the subnet to the `Microsoft.ContainerInstance/containerGroups` service. - This subnet must have at least /28 in address space.
- - `ChaosStudioRelaySubnet`
+ - Relay subnet (example name: `ChaosStudioRelaySubnet`)
- This subnet must have at least /28 in address space. ```azurecli
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
> [!WARNING] > Injecting faults can impact your application or service. Be careful not to disrupt customers.
-The Chaos Studio API provides support for starting experiments programmatically. You can also use the armclient and the Azure CLI to execute these commands from the console. Examples below are for the Azure CLI.
+The Chaos Studio API provides support for starting experiments programmatically. You can also use the ARM client and the Azure CLI to execute these commands from the console. These examples are for the Azure CLI.
> [!Warning] > These APIs are still under development and subject to change. ## REST APIs
-The Squall REST APIs can be used to start and stop experiments, query target status, query experiment status, and query and delete subscription configurations. The `AZ CLI` utility can be used to perform these actions from the command line.
+The Chaos Studio REST APIs can be used to:
+* Start, stop, and manage experiments
+* View and manage targets
+* Query experiment status
+* Query and delete subscription configurations
+
+The `AZ CLI` utility can be used to perform these actions from the command line.
> [!TIP]
-> To get more verbose output with the AZ CLI, append **--verbose** to the end of each command. This will return more metadata when commands execute, including **x-ms-correlation-request-id** which aids in debugging.
+> To get more verbose output with the AZ CLI, append `--verbose` to the end of each command. This will return more metadata when commands execute, including `x-ms-correlation-request-id` which aids in debugging.
### Chaos Provider Commands
-#### Enumerate details about the Microsoft.Chaos Resource Provider
+#### List details about the `Microsoft.Chaos` Resource Provider
```azurecli az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos?api-version={apiVersion}" --resource "https://management.azure.com"
az rest --method get --url "https://management.azure.com/subscriptions/{subscrip
az rest --method get --url "https://management.azure.com/providers/Microsoft.Chaos/operations?api-version={apiVersion}" --resource "https://management.azure.com" ```
-#### List Chaos Provider Configurations
+#### List Chaos provider configurations
```azurecli az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.chaos/chaosProviderConfigurations/?api-version={apiVersion}" --resource "https://management.azure.com" --verbose ```
-#### Create Chaos Provider Configuration
+#### Create Chaos provider configuration
```azurecli az rest --method put --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.chaos/chaosProviderConfigurations/{chaosProviderType}?api-version={apiVersion}" --body @{providerSettings.json} --resource "https://management.azure.com"
az rest --method put --url "https://management.azure.com/subscriptions/{subscrip
### Chaos Target and Agent Commands
-#### List All the Targets or Agents Under a Subscription
+#### List all the Targets or Agents under a subscription
```azurecli az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos/chaosTargets/?api-version={apiVersion}" --url-parameter "chaosProviderType={chaosProviderType}" --resource "https://management.azure.com"
az rest --method get --url "https://management.azure.com/subscriptions/{subscrip
az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Chaos/chaosExperiments?api-version={apiVersion}" --resource "https://management.azure.com" ```
-#### Get an experiment configuration details by name
+#### Get an experiment's configuration details by name
```azurecli az rest --method get --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --resource "https://management.azure.com"
az rest --method delete --url "https://management.azure.com/{experimentId}?api-v
az rest --method post --url "https://management.azure.com/{experimentId}/start?api-version={apiVersion}" ```
-#### Get statuses (History) of an experiment
+#### Get past statuses of an experiment
```azurecli az rest --method get --url "https://management.azure.com/{experimentId}/statuses?api-version={apiVersion}" --resource "https://management.azure.com"
az rest --method get --url "https://management.azure.com/{experimentId}/statuses
az rest --method get --url "https://management.azure.com/{experimentId}/status?api-version={apiVersion}" --resource "https://management.azure.com" ```
-#### Cancel (Stop) an experiment
+#### Cancel (stop) an experiment
```azurecli az rest --method get --url "https://management.azure.com/{experimentId}/cancel?api-version={apiVersion}" --resource "https://management.azure.com"
az rest --method get --url "https://management.azure.com/{experimentId}/executio
| Parameter Name | Definition | Lookup | | | | | | {apiVersion} | Version of the API to be used when executing the command provided | Can be found in the [API documentation](/rest/api/chaosstudio/) |
-| {experimentId} | Azure Resource Id for the experiment | Can be found in the [Chaos Studio Experiment Portal Blade](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
+| {experimentId} | Azure Resource ID for the experiment | Can be found in the [Chaos Studio Experiment Page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
| {chaosProviderType} | Type or Name of Chaos Studio Provider | Available providers can be found in the [List of current Provider Config Types](chaos-studio-fault-providers.md) | | {experimentName.json} | JSON containing the configuration of the chaos experiment | Generated by the user |
-| {subscriptionId} | Subscription Id where the target resource is located | Can be found in the [Subscriptions Portal Blade](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) |
-| {resourceGroupName} | Name of the resource group where the target resource is located | Can be fond in the [Resource Groups Portal Blade](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) |
-| {executionDetailsId} | Execution Id of an experiment execution | Can be found in the [Chaos Studio Experiment Portal Blade](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
+| {subscriptionId} | Subscription ID where the target resource is located | Can be found in the [Subscriptions Page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) |
+| {resourceGroupName} | Name of the resource group where the target resource is located | Can be found in the [Resource Groups Page](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) |
+| {executionDetailsId} | Execution ID of an experiment execution | Can be found in the [Chaos Studio Experiment Page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
Use cases for the speech-to-text REST API for short audio are limited. Use it on
Before you use the speech-to-text REST API for short audio, consider the following limitations:
-* Requests that use the REST API for short audio and transmit audio directly can contain no more than 30 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
+* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
* The REST API for short audio returns only final results. It doesn't provide partial results. * [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md). * [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to Text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
You can use real-time speech-to-text with the [Speech SDK](speech-sdk.md) or the
| Max blob container size | N/A | 5 GB | | Max number of blobs per container | N/A | 10000 | | Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
+| Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file |
#### Model customization
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Document Translation .NET and Python client-library SDKs are now generally avail
### [Text and document support for more than 100 languages](https://www.microsoft.com/translator/blog/2021/10/11/translator-now-translates-more-than-100-languages/) * Translator service has added [text and document language support](language-support.md) for the following languages:
- * **Bashkir**. A Turkic language spoken by approximately 1.4 million native speakers. It has three dialect groups: Southern, Eastern, and Northwestern.
+ * **Bashkir**. A Turkic language spoken by approximately 1.4 million native speakers. It has three regional language groups: Southern, Eastern, and Northwestern.
* **Dhivehi**. Also known as Maldivian, it's an Indo-Aryan language primarily spoken in the island country of Maldives. * **Georgian**. A Kartvelian language that is the official language of Georgia. It has approximately 4 million speakers. * **Kyrgyz**. A Turkic language that is the official language of Kyrgyzstan.
These additions bring the total number of languages supported in Translator to 1
* **New release**: Custom Translator V2 phase 1 is available. The newest version of Custom Translator will roll out in two phases to provide quicker translation and quality improvements, and allow you to keep your training data in the region of your choice. *See* [Microsoft Translator blog: Custom Translator: Introducing higher quality translations and regional data residency](https://www.microsoft.com/translator/blog/2020/08/05/custom-translator-v2-is-now-available/)
-### [Text and document translation support for two Kurdish dialects](https://www.microsoft.com/translator/blog/2020/08/20/translator-adds-two-kurdish-dialects-for-text-translation/)
+### [Text and document translation support for two Kurdish regional languages](https://www.microsoft.com/translator/blog/2020/08/20/translator-adds-two-kurdish-dialects-for-text-translation/)
* **Northern (Kurmanji) Kurdish** (15 million native speakers) and **Central (Sorani) Kurdish** (7 million native speakers). Most Kurdish texts are written in Kurmanji and Sorani.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/language-support.md
If you have content expressed in a less frequently used language, you can try La
| Nepali | `ne` | 2021-01-05 | | Norwegian | `no` | | | Norwegian Nynorsk | `nn` | |
-| Oriya | `or` | |
+| Odia | `or` | |
| Pashto | `ps` | | | Persian | `fa` | | | Polish | `pl` | |
If you have content expressed in a less frequently used language, you can try La
| Kannada | `kn` | 2022-10-01 | | Malayalam | `ml` | 2022-10-01 | | Marathi | `mr` | 2022-10-01 |
-| Oriya | `or` | 2022-10-01 |
+| Odia | `or` | 2022-10-01 |
| Punjabi | `pa` | 2022-10-01 | | Tamil | `ta` | 2022-10-01 | | Telugu | `te` | 2022-10-01 |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Use this article to learn which natural languages are supported by the NER featu
|Mongolian|`mn`| | | |Nepali|`ne`| | | |Norwegian (Bokmal)|`no`| |`nb` also accepted|
-|Oriya|`or`| | |
+|Odia|`or`| | |
|Pashto|`ps`| | | |Persian|`fa`| | | |Polish|`pl`| | |
cognitive-services Chatgpt Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/chatgpt-quickstart.md
Previously updated : 03/21/2023
-zone_pivot_groups: openai-quickstart
Last updated : 05/03/2023
+zone_pivot_groups: openai-quickstart-new
recommendations: false
Use this article to get started using Azure OpenAI.
::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python SDK quickstart](includes/chatgpt-python.md)]
cognitive-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-feature-evaluation.md
Removing features with low importance scores can help speed up model training by
Location information also typically benefits from creating broader classifications. For example, a latitude-longitude coordinate such as Lat: 47.67402° N, Long: 122.12154° W is too precise and forces the model to learn latitude and longitude as distinct dimensions. When you're trying to personalize based on location information, it helps to group location information in larger sectors. An easy way to do that is to choose an appropriate rounding precision for the lat-long numbers, and combine latitude and longitude into "areas" by making them one string. For example, a good way to represent Lat: 47.67402° N, Long: 122.12154° W in regions approximately a few kilometers wide would be "location":"47.7, -122.1". - **Expand feature sets with extrapolated information**
-You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it's relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
+You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national/regional cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it's relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
## Next steps
communication-services Enable Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/enable-closed-captions.md
Provide translation – Use the translation functions provided to provide transl
## Privacy concerns
-Closed captions are only available during the call or meeting for the participant that has selected to enable captions, ACS doesn't store these captions anywhere. Many countries and states have laws and regulations that apply to storing of data. It is your responsibility to use the closed captions in compliance with the law should you choose to store any of the data generated through closed captions. You must obtain consent from the parties involved in a manner that complies with the laws applicable to each participant.
+Closed captions are only available during the call or meeting for the participant that has selected to enable captions; ACS doesn't store these captions anywhere. Many countries/regions and states have laws and regulations that apply to the storage of data. It is your responsibility to use the closed captions in compliance with the law should you choose to store any of the data generated through closed captions. You must obtain consent from the parties involved in a manner that complies with the laws applicable to each participant.
Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when closed captions are enabled in a Teams call or meeting and being stored.
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
More details on eligible subscription types are as follows:
| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go | | Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
-\* In some countries, number purchases are only allowed for own use. Reselling or suballcoating to another parties is not allowed. Due to this, purchases for CSP and LSP customers is not allowed.
+\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to other parties is not allowed. For this reason, purchases for CSP and LSP customers aren't allowed.
\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Create a support ticket or reach out to acstns@microsoft.com for assistance with your application.
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
This article answers commonly asked questions about the SMS serv
Azure Communication Services customers can use Azure Event Grid to receive incoming messages. Follow this [quickstart](../../quickstarts/sms/handle-sms-events.md) to set up your event-grid to receive messages.
-### Can I receive messages from any country on toll-free numbers?
-Toll-free numbers are not capable of sending or receiving messages to/from countries outside of US, CA, and PR.
+### Can I receive messages from any country/region on toll-free numbers?
-### Can I receive messages from any country on short codes?
-Short codes are domestic numbers and are not capable of sending or receiving messages to/from outside of the country it was registered for. *Example: US short code can only send and receive messages to/from US recipients.*
+Toll-free numbers are not capable of sending or receiving messages to/from countries/regions outside of US, CA, and PR.
+
+### Can I receive messages from any country/region on short codes?
+Short codes are domestic numbers and are not capable of sending or receiving messages to/from outside of the country/region it was registered for. *Example: US short code can only send and receive messages to/from US recipients.*
### How are messages sent to landline numbers treated?
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
To enable the Azure Communications Gateway application, add the application ID o
1. Optionally, check the application ID of the service principal to confirm that you're adding the right application. 1. Search for `AzureCommunicationsGateway` with the search bar: it's under the **Azure Active Directory** subheading.
- 1. On the overview page, check that the value of **Object ID** is `8502a0ec-c76d-412f-836c-398018e2312b`.
+ 1. On the overview page, check that the value of **Application ID** is `8502a0ec-c76d-412f-836c-398018e2312b`.
1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration). 1. Add a new **Application Id**, pasting in the following value. This value is the application ID for Azure Communications Gateway. ```
communications-gateway Monitoring Azure Communications Gateway Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitoring-azure-communications-gateway-data-reference.md
This section lists all the automatically collected metrics collected for Azure C
| Active Calls | Count | Count of the total number of active calls. | | Active Emergency Calls | Count | Count of the total number of active emergency calls.|
-For more information, see a list of [all metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
## Metric Dimensions
communications-gateway Prepare For Live Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md
Your onboarding team must register the test enterprise tenant that you chose in
## 3. Assign numbers to test users in your tenant
-1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name has the suffix `azcog`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process.
+1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `commsgw`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process.
1. In your test tenant, request service from your company. 1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant. 1. Select **Voice** > **Operators**.
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
We recommend that you use an existing Azure Active Directory tenant for Azure Co
To add the Project Synergy application:
+1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
+ 1. Open PowerShell.
+ 1. Run the following command and check whether `AzureAD` appears in the output.
+ ```azurepowershell
+ Get-Module -ListAvailable
+ ```
+ 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. Close your current PowerShell window.
+ 1. Open PowerShell as an admin.
+ 1. Run the following command.
+ ```azurepowershell
+ Install-Module AzureAD
+ ```
+ 1. Close your PowerShell admin window.
1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin. 1. Select **Azure Active Directory**. 1. Select **Properties**. 1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID. 1. Open PowerShell.
-1. If you don't have the Azure Active Directory module installed, install it:
- ```azurepowershell
- Install-Module AzureAD
- ```
1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4. ```azurepowershell Connect-AzureAD -TenantId "<AADTenantID>"
Azure Communications Gateway contains services that need to access the Operator
Do the following steps in the tenant that contains your Project Synergy application.
+1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
+ 1. Open PowerShell.
+ 1. Run the following command and check whether `AzureAD` appears in the output.
+ ```azurepowershell
+ Get-Module -ListAvailable
+ ```
+ 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. Close your current PowerShell window.
+ 1. Open PowerShell as an admin.
+ 1. Run the following command.
+ ```azurepowershell
+ Install-Module AzureAD
+ ```
+ 1. Close your PowerShell admin window.
1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin. 1. Select **Azure Active Directory**. 1. Select **Properties**. 1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID. 1. Open PowerShell.
-1. If you didn't install the Azure Active Directory module as part of [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy), install it:
- ```azurepowershell
- Install-Module AzureAD
- ```
1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4. ```azurepowershell Connect-AzureAD -TenantId "<AADTenantID>"
confidential-computing Attestation Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/attestation-solutions.md
+
+ Title: Attestation
+description: Learn what attestation is and how to use it at Microsoft
+++++ Last updated : 05/02/2023+++
+# Attestation
+Computing is an essential part of our daily lives, powering everything from our smartphones to critical infrastructure. However, increasingly strict regulatory environments, the prevalence of cyberattacks, and the growing sophistication of attackers have made it difficult to trust the authenticity and integrity of the computing technologies we depend on. Attestation, a technique for verifying the software and hardware components of a system, is a critical process for establishing trust and ensuring that the computing technologies we rely on are trustworthy.
+
+In this document, we look at what attestation is, the types of attestation Microsoft offers today, and how customers can use these attestation scenarios in Microsoft solutions.
+
+## What is Attestation?
+In remote attestation, “one peer (the "Attester") produces believable information about itself ("Evidence") to enable a remote peer (the "Relying Party") to decide whether to consider that Attester a trustworthy peer. Remote attestation procedures are facilitated by an additional vital party (the "Verifier").” In simpler terms, attestation is a way of proving that a computer system is trustworthy. To make better sense of what attestation is and how it works in practice, we compare the process of attestation in computing to real-life examples with passports and background checks. The definition and models we use in this document are outlined in the Internet Engineering Task Force's (IETF) Remote ATtestation procedureS (RATS) Architecture document. To learn more, see [Internet Engineering Task Force: Remote ATtestation procedureS (RATs) Architecture](https://www.ietf.org/rfc/rfc9334.html).
+
+### Passport Model
+#### Passport Model - Immigration Desk
+1. A Citizen wants a passport to travel to a Foreign Country. The Citizen submits the required evidence to their Host Country.
+2. The Host Country receives the evidence of policy compliance from the Citizen and verifies whether the supplied evidence proves that the Citizen complies with the policies for being issued a passport.
+ - Birth certificate is valid and hasn't been altered.
+ - Issuer of the birth certificate is trusted
+ - Individual isn't part of a restricted list
+3. If the Host Country decides the evidence meets their policies, the Host Country will issue a passport to the Citizen.
+4. The Citizen travels to a foreign nation, but first must present their passport to the Foreign Country Border Patrol Agent for evaluation.
+5. The Foreign Country Border Patrol Agent checks a series of rules on the passport before trusting it:
+ - Passport is authentic and hasn't been altered.
+ - Passport was produced by a trusted country.
+ - Passport isn't expired or revoked.
+ - Passport conforms to policy of a Visa or age requirement.
+6. The Foreign Country Border Patrol Agent approves of the Passport and the Citizen can enter the Foreign Country.
+
+![Diagram of remote attestation with the passport model for an immigration desk.](media/attestation-solutions/passport-model-immigration.png)
+
+#### Passport Model - Computing
+1. A Trusted Execution Environment (TEE), otherwise known as an Attester, wants to retrieve secrets from a Secrets Manager, also known as a Relying Party. To retrieve secrets from the Secrets Manager, the TEE must prove that it's trustworthy and genuine to the Secrets Manager. The TEE submits its evidence to a Verifier to prove it's trustworthy and genuine; the evidence includes the hash of its executed code, the hash of its build environment, and the certificate generated by its manufacturer.
+2. The Verifier, an attestation service, evaluates whether the evidence given by the TEE meets the following requirements for being trusted.
+ - Certificate is valid and has not been altered.
+ - Issuer of the certificate is trusted
+ - TEE evidence isn't part of a restricted list
+3. If the Verifier decides the evidence meets the defined policies, the Verifier will create an Attestation Result and give it to the TEE.
+4. The TEE wants to exchange secrets with the Secrets Manager, but first must present its Attestation Result to the Secrets Manager for evaluation.
+5. The Secrets Manager checks a series of rules on the Attestation Result before trusting it:
+ - Attestation Result is authentic and hasn't been altered.
+ - Attestation Result was produced by a trusted authority.
+ - Attestation Result isn't expired or revoked.
+ - Attestation Result conforms to configured administrator policy.
+6. The Secrets Manager approves of the Attestation Result and exchanges secrets with the TEE.
+
+![Diagram of remote attestation with the passport model for computing.](media/attestation-solutions/passport-model-computing.png)
+
+### Background Check Model
+#### Background Check - School Verification
+1. A Person is doing a background check with a potential Employer to obtain a job. The Person submits the details of their education background, including the School they attended, to the potential Employer.
+2. The Employer retrieves the education background from the Person and forwards it to the respective School to be verified.
+3. If the School determines that the education background provided by the Person matches its records, the School issues an Attestation Result for the Employer.
+4. The School sends the Employer the issued Attestation Result, which verifies that the Person's education background matches its records.
+5. The Employer, otherwise known as the Relying Party, may check a series of rules on the Attestation Result before trusting it.
+ - Attestation Result is authentic, hasn't been altered, and truly comes from the School.
+ - Attestation Result was produced by a trusted School.
+6. The Employer approves of the Attestation Result and hires the Person.
+
+![Diagram of remote attestation with the background check model for education background.](media/attestation-solutions/background-check-model-school.png)
+
+#### Background Check - Computing
+1. A Trusted Execution Environment (TEE), otherwise known as an Attester, wants to retrieve secrets from a Secrets Manager, also known as a Relying Party. To retrieve secrets from the Secrets Manager, the TEE must prove that it's trustworthy and genuine. The TEE sends its evidence to the Secrets Manager to prove it's trustworthy and genuine; the evidence includes the hash of its executed code, the hash of its build environment, and the certificate generated by its manufacturer.
+2. The Secrets Manager retrieves the evidence from the TEE and forwards it to the Verifier to be verified.
+3. The Verifier service evaluates whether the evidence given by the TEE meets defined policy requirements for being trusted.
+ - Certificate is valid and hasn't been altered.
+ - Issuer of the certificate is trusted.
+ - TEE evidence isn't part of a restricted list.
+4. If the Verifier decides the evidence meets the defined policies, the Verifier creates an Attestation Result for the TEE and sends it to the Secrets Manager.
+5. The Secrets Manager checks a series of rules on the Attestation Result before trusting it:
+ - Attestation Result is authentic and hasn't been altered.
+ - Attestation Result was produced by a trusted authority.
+ - Attestation Result isn't expired or revoked.
+ - Attestation Result conforms to configured administrator policy.
+6. The Secrets Manager approves of the Attestation Result and exchanges secrets with the TEE.
+
+![Diagram of remote attestation with the background check model for computing.](media/attestation-solutions/background-check-model-computing.png)
+
+## Types of Attestation
+Attestation services can be utilized in two distinct ways that each provide their own benefits.
+
+### Cloud Provider
+At Microsoft, we provide [Microsoft Azure Attestation (MAA)](https://azure.microsoft.com/products/azure-attestation) as a customer-facing service and a framework for attesting Trusted Execution Environments (TEEs) like Intel Software Guard Extensions (SGX) enclaves, virtualization-based security (VBS) enclaves, Trusted Platform Modules (TPMs), Trusted Launch and Confidential Virtual Machines. Benefits of using a cloud provider's attestation service such as Azure Attestation include:
+- Freely available
+- Source code is available for government customers via the Microsoft Code Center Premium Tool
+- Protects data while in use by operating within an Intel SGX enclave.
+- Attests multiple TEEs in one single solution.
+- Offers a strong Service Level Agreement (SLA)
+
+### Build Your Own
+Customers can create their own attestation mechanisms to trust their computing infrastructure, using tools provided by cloud and hardware providers. Building your own attestation processes for Microsoft solutions may require the use of [Trusted Hardware Identity Management (THIM)](../security/fundamentals/trusted-hardware-identity-management.md), a solution that handles cache management of certificates for all trusted execution environments (TEEs) residing in Azure and provides trusted computing base (TCB) information to enforce a minimum baseline for attestation solutions. Benefits of building and using your own attestation service include:
+- 100% control over the attestation processes to meet regulatory and compliance requirements
+- Customization of integrations with other computing technologies
+
+## Attestation Scenarios at Microsoft
+There are many attestation scenarios at Microsoft that enable customers to choose between the Cloud Provider and Build Your Own approaches. In each of the following sections, we look at the Azure offerings and the attestation scenarios available.
+
+### VMs with Application Enclaves
+[VMs with Application Enclaves](confidential-computing-enclaves.md) are enabled by Intel SGX, which allows organizations to create enclaves that protect data, and keep data encrypted while the CPU processes the data. Customers can attest Intel SGX enclaves in Azure with MAA and on their own.
+- [Intel SGX Attestation Home Page](attestation.md)
+- [Cloud Provider: Intel SGX Sample Code Attestation with MAA](/samples/azure-samples/microsoft-azure-attestation/sample-code-for-intel-sgx-attestation-using-microsoft-azure-attestation/)
+- [Build Your Own: Open Enclave Attestation](https://github.com/openenclave/openenclave/blob/master/samples/attestation/README.md)
+
+### Confidential Virtual Machines
+[Confidential Virtual Machines](confidential-vm-overview.md) are enabled by AMD SEV-SNP, which allows organizations to have hardware-based isolation between virtual machines, and underlying host management code (including hypervisor). Customers can attest their managed confidential virtual machines in Azure with MAA and on their own.
+- [Confidential VMs Attestation Home Page](https://github.com/Azure/confidential-computing-cvm-guest-attestation/blob/main/cvm-guest-attestation.md#azure-confidential-vms-attestation-guidance--faq)
+- [Cloud Provider: What is guest attestation for confidential VMs?](guest-attestation-confidential-vms.md)
+- [Build Your Own: Fetch and verify raw AMD SEV-SNP report on your own](https://github.com/Azure/confidential-computing-cvm-guest-attestation/blob/main/cvm-guest-attestation.md#i-dont-trust-maa-or-the-library-you-are-asking-me-to-install-in-my-vm-but-i-do-trust-the-underlying-hcl-firmware-how-can-i-fetch-and-verify-raw-amd-sev-snp-report-on-my-own)
+
+### Confidential Containers on Azure Container Instances
+[Confidential Containers on Azure Container Instances](confidential-containers.md) provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security, data privacy and runtime code integrity goals. Confidential containers run in a hardware backed Trusted Execution Environment (TEE) that provides intrinsic capabilities like data integrity, data confidentiality and code integrity.
+- [Cloud Provider: Attestation in Confidential containers on Azure Container Instances](https://aka.ms/caciattestation)
confidential-computing Use Cases Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md
Partnered health facilities contribute private health data sets to train an ML m
### Protecting privacy with IoT and smart-building solutions
-Many countries have strict privacy laws about gathering and using data on peopleΓÇÖs presence and movements inside buildings. This may include data that is directly personally identifiable data from CCTV or security badge scans. Or, indirectly identifiable where different sets of sensor data could be considered personally identifiable when grouped together.
+Many countries/regions have strict privacy laws about gathering and using data on people's presence and movements inside buildings. This may include data that is directly personally identifiable, such as CCTV footage or security badge scans, or data that is indirectly identifiable, where different sets of sensor data could be considered personally identifiable when grouped together.
Privacy needs to be balanced with cost and environmental needs, where organizations are keen to understand occupancy and movement in order to provide the most efficient use of energy to heat and light a building.
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Previously updated : 06/02/2022 Last updated : 5/4/2023
Features include:
The following code is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container. ```json
-"containers": [
- {
- "name": "main",
- "image": "[parameters('container_image')]",
- "env": [
- {
- "name": "HTTP_PORT",
- "value": "80"
- },
- {
- "name": "SECRET_VAL",
- "secretRef": "mysecret"
- }
- ],
- "resources": {
- "cpu": 0.5,
- "memory": "1Gi"
- },
- "volumeMounts": [
- {
- "mountPath": "/myfiles",
- "volumeName": "azure-files-volume"
- }
- ]
- "probes":[
+{
+ "properties": {
+ "template": {
+ "containers": [
{
- "type":"liveness",
- "httpGet":{
- "path":"/health",
- "port":8080,
- "httpHeaders":[
- {
- "name":"Custom-Header",
- "value":"liveness probe"
- }]
+ "name": "main",
+ "image": "[parameters('container_image')]",
+ "env": [
+ {
+ "name": "HTTP_PORT",
+ "value": "80"
},
- "initialDelaySeconds":7,
- "periodSeconds":3
- },
- {
- "type":"readiness",
- "tcpSocket":
- {
- "port": 8081
- },
- "initialDelaySeconds": 10,
- "periodSeconds": 3
- },
- {
- "type": "startup",
- "httpGet": {
- "path": "/startup",
+ {
+ "name": "SECRET_VAL",
+ "secretRef": "mysecret"
+ }
+ ],
+ "resources": {
+ "cpu": 0.5,
+ "memory": "1Gi"
+ },
+ "volumeMounts": [
+ {
+ "mountPath": "/appsettings",
+ "volumeName": "appsettings-volume"
+ }
+ ],
+ "probes": [
+ {
+ "type": "liveness",
+ "httpGet": {
+ "path": "/health",
"port": 8080, "httpHeaders": [
- {
- "name": "Custom-Header",
- "value": "startup probe"
- }]
+ {
+ "name": "Custom-Header",
+ "value": "liveness probe"
+ }
+ ]
+ },
+ "initialDelaySeconds": 7,
+ "periodSeconds": 3
+ },
+ {
+ "type": "readiness",
+ "tcpSocket": {
+ "port": 8081
+ },
+ "initialDelaySeconds": 10,
+ "periodSeconds": 3
},
- "initialDelaySeconds": 3,
- "periodSeconds": 3
- }]
+ {
+ "type": "startup",
+ "httpGet": {
+ "path": "/startup",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "startup probe"
+ }
+ ]
+ },
+ "initialDelaySeconds": 3,
+ "periodSeconds": 3
+ }
+ ]
+      }
+    ],
+    "initContainers": [
+      {
+        "name": "init",
+        "image": "[parameters('init_container_image')]",
+        "resources": {
+          "cpu": 0.25,
+          "memory": "0.5Gi"
+        },
+        "volumeMounts": [
+          {
+            "mountPath": "/appsettings",
+            "volumeName": "appsettings-volume"
+          }
+        ]
+      }
+    ]
+  }
+ ...
}
-],
-
+ ...
+}
``` | Setting | Description | Remarks |
The following code is an example of the `containers` array in the [`properties.t
<a id="allocations"></a>
-In the Consumption plan, the total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
-
-| vCPUs (cores) | Memory |
-|||
-| `0.25` | `0.5Gi` |
-| `0.5` | `1.0Gi` |
-| `0.75` | `1.5Gi` |
-| `1.0` | `2.0Gi` |
-| `1.25` | `2.5Gi` |
-| `1.5` | `3.0Gi` |
-| `1.75` | `3.5Gi` |
-| `2.0` | `4.0Gi` |
-
-Alternatively, the Consumption workload profile in the Consumption + Dedicated plan structure, the total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
-
-| vCPUs (cores) | Memory |
-|||
-| `0.25` | `0.5Gi` |
-| `0.5` | `1.0Gi` |
-| `0.75` | `1.5Gi` |
-| `1.0` | `2.0Gi` |
-| `1.25` | `2.5Gi` |
-| `1.5` | `3.0Gi` |
-| `1.75` | `3.5Gi` |
-| `2.0` | `4.0Gi` |
-| `2.25` | `4.5Gi` |
-| `2.5` | `5.0Gi` |
-| `2.75` | `5.5Gi` |
-| `3.0` | `6.0Gi` |
-| `3.25` | `6.5Gi` |
-| `3.5` | `7.0Gi` |
-| `3.75` | `7.5Gi` |
-| `4.0` | `8.0Gi` |
+In the Consumption plan and the Consumption workload profile in the [Consumption + Dedicated plan structure](plans.md#consumption-dedicated), the total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
+
+| vCPUs (cores) | Memory | Consumption plan | Consumption workload profile |
+|||||
+| `0.25` | `0.5Gi` | ✔ | ✔ |
+| `0.5` | `1.0Gi` | ✔ | ✔ |
+| `0.75` | `1.5Gi` | ✔ | ✔ |
+| `1.0` | `2.0Gi` | ✔ | ✔ |
+| `1.25` | `2.5Gi` | ✔ | ✔ |
+| `1.5` | `3.0Gi` | ✔ | ✔ |
+| `1.75` | `3.5Gi` | ✔ | ✔ |
+| `2.0` | `4.0Gi` | ✔ | ✔ |
+| `2.25` | `4.5Gi` | | ✔ |
+| `2.5` | `5.0Gi` | | ✔ |
+| `2.75` | `5.5Gi` | | ✔ |
+| `3.0` | `6.0Gi` | | ✔ |
+| `3.25` | `6.5Gi` | | ✔ |
+| `3.5` | `7.0Gi` | | ✔ |
+| `3.75` | `7.5Gi` | | ✔ |
+| `4.0` | `8.0Gi` | | ✔ |
- The total of the CPU requests in all of your containers must match one of the values in the *vCPUs* column. - The total of the memory requests in all your containers must match the memory value in the memory column in the same row of the CPU column.
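+
+As an illustration (the app name, resource group, environment, and image below are placeholders, not values from this article), a single-container app that uses the `0.5` vCPU / `1.0Gi` combination from the table can be created with the Azure CLI:
+
+```azurecli
+az containerapp create \
+  --name my-container-app \
+  --resource-group my-resource-group \
+  --environment my-environment \
+  --image mcr.microsoft.com/k8se/quickstart:latest \
+  --cpu 0.5 \
+  --memory 1.0Gi
+```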
When you use a Dedicated workload profile in the Consumption + Dedicated plan st
## Multiple containers
-You can define multiple containers in a single container app to implement the [sidecar pattern](/azure/architecture/patterns/sidecar). The containers in a container app share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md).
+In advanced scenarios, you can run multiple containers in a single container app. The containers share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md). There are two ways to run multiple containers in a container app: [sidecar containers](#sidecar-containers) and [init containers](#init-containers).
-Examples of sidecar containers include:
+### Sidecar containers
+
+You can define multiple containers in a single container app to implement the [sidecar pattern](/azure/architecture/patterns/sidecar). Examples of sidecar containers include:
- An agent that reads logs from the primary app container on a [shared volume](storage-mounts.md?pivots=aca-cli#temporary-storage) and forwards them to a logging service. - A background process that refreshes a cache used by the primary app container in a shared volume.
Examples of sidecar containers include:
> [!NOTE] > Running multiple containers in a single container app is an advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. In most situations where you want to run multiple containers, such as when implementing a microservice architecture, deploy each service as a separate container app.
-To run multiple containers in a container app, add more than one container in the containers array of the container app template.
+To run multiple containers in a container app, add more than one container in the `containers` array of the container app template.
+
+### <a name="init-containers"></a>Init containers (preview)
+
+You can define one or more [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in a container app. Init containers run before the primary app container and can be used to perform initialization tasks such as downloading data or preparing the environment.
+
+Init containers are defined in the `initContainers` array of the container app template. The containers run in the order they are defined in the array and must complete successfully before the primary app container starts.
## Container registries
The following example shows how to configure Azure Container Registry credential
{ ... "configuration": {
- "secrets": [
- {
- "name": "acr-password",
- "value": "my-acr-password"
- }
- ],
-...
- "registries": [
- {
- "server": "myacr.azurecr.io",
- "username": "someuser",
- "passwordSecretRef": "acr-password"
- }
- ]
+ "secrets": [
+ {
+ "name": "acr-password",
+ "value": "my-acr-password"
+ }
+ ],
+ ...
+ "registries": [
+ {
+ "server": "myacr.azurecr.io",
+ "username": "someuser",
+ "passwordSecretRef": "acr-password"
+ }
+ ]
} } ```
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following IP rules are required when using NSGs on both the Consumption only
- If you're running HTTP servers, you might need to add ports `80` and `443`. - Adding deny rules for some ports and protocols with lower priority than `65000` may cause service interruption and unexpected behavior.
+- Don't explicitly deny the Azure DNS address `168.63.129.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function.
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Secrets Key Vault references aren't supported in PowerShell.
> [!NOTE]
-> If you're using [UDR With Azure Firewall](./networking.md#user-defined-routes-udrpreview), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview).
+> If you're using [UDR With Azure Firewall](./networking.md#user-defined-routes-udrpreview), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall. Refer to [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview) to decide which additional service tags you need.
#### Key Vault secret URI and secret rotation
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
IP addresses are broken down into the following types:
Virtual network integration depends on a dedicated subnet. How IP addresses are allocated in a subnet and what subnet sizes are supported depends on which plan you're using in Azure Container Apps. Selecting an appropriately sized subnet for the scale of your Container Apps is important as subnet sizes can't be modified post creation in Azure. -- Consumption only architecture:
+- **Consumption only architecture:**
- /23 is the minimum subnet size required for virtual network integration. - Container Apps reserves a minimum of 60 IPs for infrastructure in your VNet, and the amount may increase up to 256 addresses as your container environment scales. - As your app scales, a new IP address is allocated for each new replica. -- Workload profiles architecture:
+- **Workload profiles architecture:**
- /27 is the minimum subnet size required for virtual network integration. - The subnet you're integrating your container app with must be delegated to `Microsoft.App/environments`. - 11 IP addresses are automatically reserved for integration with the subnet. When your apps are running on workload profiles, the number of IP addresses required for infrastructure integration doesn't vary based on the scale of your container apps.
Azure creates a default route table for your virtual networks upon create. By im
#### Configuring UDR with Azure Firewall - preview:
-UDR is only supported on the workload profiles architecture. For a guide on how to setup UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md).
+UDR is only supported on the workload profiles architecture. The following application and network rules must be added to the allowlist for your firewall depending on which resources you are using.
-The following FQDNs and service tags must be added to the allowlist for your firewall depending on which resources you are using:
+> [!Note]
+> For a guide on how to setup UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md).
-- For all scenarios, you need to allow the `MicrosoftContainerRegistry` and its dependency `AzureFrontDoor.FirstParty` service tags through your Azure Firewall. Alternatively, you can add the following FQDNs: *mcr.microsoft.com* and **.data.mcr.microsoft.com*.-- If you're using Azure Container Registry (ACR), you need to add the `AzureContainerRegistry` service tag and the **.blob.core.windows.net* FQDN in the Azure Firewall.-- If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add the following FQDNs to your firewall: *hub.docker.com*, *registry-1.docker.io*, and *production.cloudflare.docker.com*.-- If you're using [Azure Key Vault references](./manage-secrets.md#reference-secret-from-key-vault), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall.
+##### Azure Firewall - Application Rules
+
+Application rules allow or deny traffic based on the application layer. The following outbound firewall application rules are required based on scenario.
+
+| Scenarios | FQDNs | Description |
+|--|--|--|
+| All scenarios | *mcr.microsoft.com*, **.data.mcr.microsoft.com* | These FQDNs for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these application rules or the network rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
+| Azure Container Registry (ACR) | *Your-ACR-address*, **.blob.core.windows.net* | These FQDNs are required when using Azure Container Apps with ACR and Azure Firewall. |
+| Azure Key Vault | *Your-Azure-Key-Vault-address*, *login.microsoft.com* | These FQDNs are required in addition to the service tag required for the network rule for Azure Key Vault. |
+| Docker Hub Registry | *hub.docker.com*, *registry-1.docker.io*, *production.cloudflare.docker.com* | If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add these FQDNs to the firewall. |
+
+##### Azure Firewall - Network Rules
+
+Network rules allow or deny traffic based on the network and transport layer. The following outbound firewall network rules are required based on scenario.
+
+| Scenarios | Service Tag | Description |
+|--|--|--|
+| All scenarios | *MicrosoftContainerRegistry*, *AzureFrontDoor.FirstParty* | These service tags for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these network rules or the application rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
+| Azure Container Registry (ACR) | *AzureContainerRegistry* | When using ACR with Azure Container Apps, you need to allow this service tag used by Azure Container Registry. |
+| Azure Key Vault | *AzureKeyVault*, *AzureActiveDirectory* | These service tags are required in addition to the FQDN for the application rule for Azure Key Vault. |
+
+> [!Note]
+> For Azure resources you are using with Azure Firewall not listed above, please refer to the [service tags documentation](../virtual-network/service-tags-overview.md#available-service-tags).
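+
+As a sketch only (the firewall name, resource group, rule collection names, priorities, and the `10.0.0.0/23` source range below are placeholder assumptions, and the commands assume an Azure Firewall that uses classic rules rather than a firewall policy), the MCR entries from the tables above could be added with the Azure CLI:
+
+```azurecli
+# Application rule: allow the MCR FQDNs used by Azure Container Apps.
+az network firewall application-rule create --resource-group my-resource-group --firewall-name my-firewall --collection-name aca-app-rules --name allow-mcr --priority 200 --action Allow --protocols Https=443 --source-addresses "10.0.0.0/23" --target-fqdns "mcr.microsoft.com" "*.data.mcr.microsoft.com"
+
+# Network rule: allow the MCR service tags used by Azure Container Apps.
+az network firewall network-rule create --resource-group my-resource-group --firewall-name my-firewall --collection-name aca-network-rules --name allow-mcr-tags --priority 200 --action Allow --protocols TCP --destination-addresses MicrosoftContainerRegistry AzureFrontDoor.FirstParty --destination-ports 443 --source-addresses "10.0.0.0/23"
+```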
### NAT gateway integration - preview
With the workload profiles architecture (preview), you can fully secure your ing
## DNS -- **Custom DNS**: If your VNet uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. If you don't use the Azure recursive resolvers, the Container Apps environment can't function.
+- **Custom DNS**: If your VNet uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) use this IP address to resolve requests. When configuring your NSG or firewall, don't block the `168.63.129.16` address; otherwise, your Container Apps environment won't function.
- **VNet-scope ingress**: If you plan to use VNet-scope [ingress](ingress-overview.md) in an internal Container Apps environment, configure your domains in one of the following ways:
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Your virtual networks in Azure have default route tables in place when you creat
## Configure firewall policies > [!NOTE]
-> When using UDR with Azure Firewall in Azure Container Apps, you will need to add certain FQDN's and service tags to the allowlist for the firewall. For example, the FQDNs *mcr.microsoft.com* and **.data.mcr.microsoft.com* are required for all scenarios. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview).
+> When using UDR with Azure Firewall in Azure Container Apps, you need to add certain FQDNs and service tags to the allowlist for the firewall. To determine which FQDNs and service tags you need, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview).
Now, all outbound traffic from your container app is routed to the firewall. Currently, the firewall still allows all outbound traffic through. In order to manage what outbound traffic is allowed or denied, you need to configure firewall policies.
cosmos-db Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/compliance.md
Azure Cosmos DB is available in all Azure regions. Microsoft makes the following
- **Azure China** is available through a unique partnership between Microsoft and 21Vianet, one of the countryΓÇÖs largest Internet providers. - **Azure Government** is available from five regions in the United States to US government agencies and their partners. Two regions (US DoD Central and US DoD East) are reserved for exclusive use by the US Department of Defense. - **Azure Government Secret** is available from three regions exclusively for the needs of US Government and designed to accommodate classified Secret workloads and native connectivity to classified networks.-- **Azure Government Top Secret** serves the national security mission and empowers leaders across the Intelligence Community (IC), Department of Defense (DoD), and Federal Civilian agencies to process national security workloads classified at the US Top Secret level.
+- **Azure Government Top Secret** serves America's security mission and empowers leaders across the Intelligence Community (IC), Department of Defense (DoD), and Federal Civilian agencies to process security workloads classified at the US Top Secret level.
To help you meet your own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry both in terms of breadth (total number of [compliance offerings](../compliance/index.yml)) and depth (number of [customer-facing services](https://azure.microsoft.com/services/) in assessment scope). For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-always-encrypted.md
> [!IMPORTANT] > A breaking change has been introduced with the 1.0 release of our encryption packages. If you created data encryption keys and encryption-enabled containers with prior versions, you will need to re-create your databases and containers after migrating your client code to 1.0 packages.
-Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure Cosmos DB. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the database.
+Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national/regional identification numbers (for example, U.S. social security numbers), stored in Azure Cosmos DB. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the database.
Always Encrypted brings client-side encryption capabilities to Azure Cosmos DB. Encrypting your data client-side can be required in the following scenarios:
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
Azure Synapse Analytics COPY statement directly supports Azure Blob, Azure Data
| Supported source data store type | Supported format | Supported source authentication type | | :-- | -- | :-- |
- | [Azure Blob](connector-azure-blob-storage.md) | [Delimited text](format-delimited-text.md) | Account key authentication, shared access signature authentication, service principal authentication, managed identity authentication |
+ | [Azure Blob](connector-azure-blob-storage.md) | [Delimited text](format-delimited-text.md) | Account key authentication, shared access signature authentication, service principal authentication, system-assigned managed identity authentication |
| &nbsp; | [Parquet](format-parquet.md) | Account key authentication, shared access signature authentication | | &nbsp; | [ORC](format-orc.md) | Account key authentication, shared access signature authentication |
- | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | [Delimited text](format-delimited-text.md)<br/>[Parquet](format-parquet.md)<br/>[ORC](format-orc.md) | Account key authentication, service principal authentication, managed identity authentication |
+ | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | [Delimited text](format-delimited-text.md)<br/>[Parquet](format-parquet.md)<br/>[ORC](format-orc.md) | Account key authentication, service principal authentication, system-assigned managed identity authentication |
>[!IMPORTANT] >- When you use managed identity authentication for your storage linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively.
If the requirements aren't met, the service checks the settings and automaticall
| Supported source data store type | Supported source authentication type | | :-- | :- |
- | [Azure Blob](connector-azure-blob-storage.md) | Account key authentication, managed identity authentication |
+ | [Azure Blob](connector-azure-blob-storage.md) | Account key authentication, system-assigned managed identity authentication |
| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md) | Service principal authentication |
- | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | Account key authentication, managed identity authentication |
+ | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | Account key authentication, system-assigned managed identity authentication |
>[!IMPORTANT] >- When you use managed identity authentication for your storage linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
The SAP ODP framework is contained in all up-to-date SAP NetWeaver based systems
The SAP CDC connector supports basic authentication or Secure Network Communications (SNC), if SNC is configured.
+## Current limitations
+
+Here are current limitations of the SAP CDC connector in Data Factory:
+
+- You can't reset or delete ODQ subscriptions in Data Factory (use transaction ODQMON in the connected SAP system for this).
+- You can't use SAP hierarchies with the solution.
+ ## Prerequisites To use this SAP CDC connector, refer to [Prerequisites and setup for the SAP CDC connector](sap-change-data-capture-prerequisites-configuration.md).
To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-c
## Transform data with the SAP CDC connector
-SAP CDC datasets can be used as source in mapping data flow. The raw SAP ODP change feed is difficult to interpret and updating it correctly to a sink can be a challenge. Mapping data flow takes care of this complexity by automatically evaluating technical attributes that are provided by the ODP framework (like ODQ_CHANGEMODE). Users can therefore concentrate on the required transformation logic without having to bother with the internals of the SAP ODP change feed, the right order of changes, etc.
+The raw SAP ODP change feed is difficult to interpret, and applying it correctly to a sink can be a challenge. For example, technical attributes associated with each row (like ODQ_CHANGEMODE) have to be understood to apply the changes to the sink correctly. Also, an extract of change data from ODP can contain multiple changes to the same key (for example, the same sales order). It's therefore important to respect the order of changes, while at the same time optimizing performance by processing the changes in parallel.
+Moreover, managing a change data capture feed also requires keeping track of state, for example in order to provide built-in mechanisms for error recovery.
+Azure Data Factory mapping data flows take care of all these aspects. Therefore, SAP CDC connectivity is part of the mapping data flow experience, which allows users to concentrate on the required transformation logic without having to bother with the technical details of data extraction.
To get started, create a pipeline with a mapping data flow.
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
The section "SAP to stage", which is periodically updated while the extraction f
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-monitor-data-flow.png" alt-text="Screenshot of the data flow monitor.":::
+When a data flow run has finished successfully, the data flow monitor shows detailed information about the extraction process from SAP.
+Besides runtime information like start time and duration, you also find the number of rows copied from SAP in the line **Rows copied** and the number of rows passed on from the source to the next transformation (in this case the sink transformation) in the line **Rows calculated**. Note that **Rows calculated** can be smaller than **Rows copied**: after extracting the changed data records from the SAP system, the data flow performs a deduplication of the changed rows based on the key definition. Only the most recent record is passed further down the data flow.
## Monitor data extractions on SAP systems
In the subscription, a list of requests corresponds to mapping data flow runs in
Based on the timestamp in the first row, find the line that corresponds to the mapping data flow run you want to analyze. If the number of rows shown equals the number of rows read by the mapping data flow, you've verified that Data Factory has read and transferred the data as provided by the SAP system. In this scenario, we recommend that you consult with the team that's responsible for your SAP system.
-## Current limitations
-
-Here are current limitations of the SAP CDC connector in Data Factory:
--- You can't reset or delete ODQ subscriptions in Data Factory (use ODQMON for this).-- You can't use SAP hierarchies with the solution.- ## Next steps Learn more about [SAP connectors](industry-sap-connectors.md).
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 03/21/2023 Last updated : 05/01/2023 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
|Edge resource group |Select the resource group to add the image to. | |Save image as | The name for the VM image that you're creating from the VHD you uploaded to the storage account. | |OS type |Choose from Windows or Linux as the operating system of the VHD you'll use to create the VM image. |
- |VM generation |Choose Gen 1 or Gen 2 as the generation of the image you'll use to create the VM. |
+ |VM generation |Choose Gen 1 or Gen 2 as the generation of the image you'll use to create the VM. For Gen 2 VMs, secure boot is enabled by default. |
![Screenshot showing the Add image page for a virtual machine with the Add button highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
databox-online Azure Stack Edge Mini R Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-safety.md
Only charge the battery pack when it is a part of the Azure Stack Edge Mini R de
![Warning Icon 8](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:**
-* In lieu of using the provided AC/DC power supply, this system also has the option to use a field provided Type 2590 Battery. In this case, the end user shall verify that it meets all applicable safety, transportation, environmental, and any other national/local regulations.
+* In lieu of using the provided AC/DC power supply, this system also has the option to use a field provided Type 2590 Battery. In this case, the end user shall verify that it meets all applicable safety, transportation, environmental, and any other national/regional and local regulations.
* When operating the system with Type 2590 Battery, operate the battery within the conditions of use specified by the battery manufacturer. ![Warning Icon 9](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:**
The Netgear A6150 WiFi USB Adapter provided with this equipment is intended to
**Netgear A6150 Specific Absorption Rate (SAR):** 0.54 W/kg averaged over 10g of tissue ΓÇâ
-This device may operate in all member states of the EU. Observe national and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150-5350 MHz frequency range in the following countries:
+This device may operate in all member states of the EU. Observe national/regional and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150-5350 MHz frequency range in the following countries:
![EU countries that require indoor use only](./media/azure-stack-edge-mini-r-safety/mini-r-safety-eu-indoor-use-only.png)
databox-online Azure Stack Edge Pro 2 Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-safety.md
For Model: DB040-W only
Hereby, declares that this device is in compliance with EU Directive 2014/53/EU and UK Radio Equipment Regulations 2017 (S.I. 2017/1206). The full text of the EU and UK declaration of conformity are available on the [product webpage](https://azure.microsoft.com/products/azure-stack/edge/#overview).
-This device may operate in all member states of the EU. Observe national and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150 - 5350 MHz frequency range in the following countries:
+This device may operate in all member states of the EU. Observe national/regional and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150 - 5350 MHz frequency range in the following countries:
:::image type="content" source="media/azure-stack-edge-pro-2-safety/icon-eu-countries-indoor-use.png" alt-text="List of EU countries":::
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 03/02/2023 Last updated : 05/05/2023 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver provides the following benefits:
## Regional availability
-Azure DNS Private Resolver is available in the following regions:
-
-| Americas | Europe | Asia & Africa |
-|||-|
-| East US | West Europe | East Asia |
-| East US 2 | North Europe | Southeast Asia |
-| Central US | UK South | Japan East |
-| South Central US | France Central | Korea Central |
-| North Central US | Sweden Central | South Africa North|
-| West Central US | Switzerland North| Australia East |
-| West US 2 | | Central India |
-| West US 3 | | |
-| Canada Central | | |
-| Brazil South | | |
+See [Azure Products by Region - Azure DNS](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=dns&regions=all).
## Data residency
event-hubs Create Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/create-schema-registry.md
This article shows you how to create a schema group with schemas in a schema reg
:::image type="content" source="./media/create-schema-registry/namespace-page.png" alt-text="Image showing the Schema Registry page in the Azure portal"::: 1. On the **Create Schema Group** page, do these steps: 1. Enter a **name** for the schema group.
- 1. For **Serialization type**, pick **Avro** serialization format that applies to all schemas in the schema group. The **JSON** serialization is not supported yet.
- 1. Select a **compatibility mode** for all schemas in the group. For **Avro**, forward and backward compatibility modes are supported.
- 1. Then, select **Create** to create the schema group.
+ 1. For **Serialization type**, pick **Avro** serialization format that applies to all schemas in the schema group.
+
+ > [!NOTE]
+ > Currently, Schema Registry doesn't support **JSON** serialization.
+ 3. Select a **compatibility mode** for all schemas in the group. For **Avro**, forward and backward compatibility modes are supported.
+ 4. Then, select **Create** to create the schema group.
:::image type="content" source="./media/create-schema-registry/create-schema-group-page.png" alt-text="Image showing the page for creating a schema group"::: 1. Select the name of the **schema group** in the list of schema groups.
firewall Firewall Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-sftp.md
You can use Azure Firewall to access a storage account container via SFTP. Azure PowerShell is used to deploy a firewall in a virtual network and configured with DNAT rules to translate the SFTP traffic to the storage account container. The storage account container is configured with a private endpoint to allow access from the firewall. To connect to the container, you use the firewall public IP address and the storage account container name. In this article, you:
firewall Firewall Structured Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-structured-logs.md
Register-AzResourceProvider -ProviderNamespace Microsoft.Network
``` > [!NOTE]
-> It can take several minutes for this to take effect. Run the following Azure PowerShell command to see the `ResistratonState`:
+> It can take several minutes for this to take effect. Run the following Azure PowerShell command to see the `RegistrationState`:
> > `Get-AzProviderFeature -FeatureName "AFWEnableStructuredLogs" -ProviderNamespace "Microsoft.Network"` >
->When the `ResistratonState` is *Registered*, consider performing an update on Azure Firewall for the change to take effect immediately.
+>When the `RegistrationState` is *Registered*, consider performing an update on Azure Firewall for the change to take effect immediately.
Run the following Azure PowerShell command to turn this feature off:
hdinsight How To Use Hbck2 Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/how-to-use-hbck2-tool.md
+
+ Title: How to use Apache HBase HBCK2 Tool
+description: Learn how to use HBase HBCK2 Tool
++++++ Last updated : 05/05/2023+
+# How to use Apache HBase HBCK2 Tool
+
+Learn how to use HBase HBCK2 Tool.
+
+## HBCK2 Overview
+
+HBCK2 is currently a simple tool that does only one thing at a time. In hbase-2.x, the Master is the final arbiter of all state, so a general principle for most HBCK2 commands is that it asks the Master to effect the repair. A Master must be up and running before you can run HBCK2 commands. While HBCK1 performed analysis and reported your cluster as GOOD or BAD, HBCK2 is less presumptuous. In hbase-2.x, the operator figures out what needs fixing and then uses tooling, including HBCK2, to do the fixup.
++
+## HBCK2 vs HBCK1
+
+HBCK2 is the successor to HBCK, the repair tool that shipped with hbase-1.x (also known as HBCK1). Use HBCK2 in place of HBCK1 when making repairs against hbase-2.x clusters. HBCK1 shouldn't be run against an hbase-2.x install, as it may do damage. Its write facility (-fix) has been removed. It can report on the state of an hbase-2.x cluster, but its assessments are inaccurate because it doesn't understand the internal workings of hbase-2.x. HBCK2 doesn't work the way HBCK1 used to, even where commands are similarly named across the two versions.
+
+## Obtaining HBCK2
+
+You can find HBCK2 releases under the Apache HBase distribution directory; for example, download the [hbase-operator-tools-1.2.0 binary release](https://dlcdn.apache.org/hbase/hbase-operator-tools-1.2.0/hbase-operator-tools-1.2.0-bin.tar.gz).
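+A minimal sketch for downloading and unpacking the release on a cluster node (the exact directory layout inside the archive may vary by release; locate the hbase-hbck2 jar after extracting):
+
+```
+# Download and unpack the hbase-operator-tools binary release that contains the HBCK2 jar
+wget https://dlcdn.apache.org/hbase/hbase-operator-tools-1.2.0/hbase-operator-tools-1.2.0-bin.tar.gz
+tar -xzf hbase-operator-tools-1.2.0-bin.tar.gz
+# List the extracted contents to find the hbase-hbck2-*.jar
+ls hbase-operator-tools-1.2.0/
+```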
++
+### Master UI: The HBCK Report
+
+An HBCK Report page was added to the Master in 2.1.6 at `/hbck.jsp`. It shows output from two inspections run by the master on an interval. One is the output of the `CatalogJanitor` whenever it runs. If the `CatalogJanitor` finds overlaps or holes in `hbase:meta`, it lists what it has found. Another background 'chore' process was added to compare `hbase:meta` and filesystem content; if it finds any anomaly, it makes a note in its HBCK Report section.
+
+To run the CatalogJanitor, execute the command in hbase shell: `catalogjanitor_run`
+
+To run hbck chore, execute the command in hbase shell: `hbck_chore_run`
+
+Neither command takes any inputs.
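+For example, you can trigger both inspections non-interactively from an OS shell on a cluster node (the same pattern as the `hbase:meta` scan shown later in this article):
+
+```
+echo "catalogjanitor_run" | hbase shell
+echo "hbck_chore_run" | hbase shell
+```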
+
+## Running HBCK2
+
+Run the hbck command by launching it via the `$HBASE_HOME/bin/hbase` script. By default, running `bin/hbase hbck` runs the built-in HBCK1 tooling. To run HBCK2, point at a built HBCK2 jar using the `-j` option, as in:
+`hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar`
+
+This command with no options or arguments passed prints the HBCK2 help.
+
+## HBCK2 Commands
+
+> [!NOTE]
+> Test these commands on a test cluster to understand the functionality before running in production environment
+
+`assigns [OPTIONS] <ENCODED_REGIONNAME/INPUTFILES_FOR_REGIONNAMES>... | -i <INPUT_FILE>...`
+
+**Options:**
+
+`-o,--override` - override ownership by another procedure
+
+`-i,--inputFiles` - takes one or more input files of encoded region names
+
+A 'raw' assign that can be used even during Master initialization (if the -skip flag is specified). Skirts Coprocessors. Pass one or more encoded region names. de00010733901a05f5a2a3a382e27dd4 is an example of what a user-space encoded region name looks like. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns de00010733901a05f5a2a3a382e27dd4
+```
+Returns the PID(s) of the created AssignProcedure(s) or -1 if none. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains encoded region names, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns -i fileName1 fileName2
+```
+
+`unassigns [OPTIONS] <ENCODED_REGIONNAME>... | -i <INPUT_FILE>...`
+
+**Options:**
+
+`-o,--override` - override ownership by another procedure
+
+`-i,--inputFiles` - takes one or more input files of encoded region names
+
+A 'raw' unassign that can be used even during Master initialization (if the -skip flag is specified). Skirts Coprocessors. Pass one or more encoded region names. de00010733901a05f5a2a3a382e27dd4 is an example of what a user-space encoded region name looks like. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar unassigns de00010733901a05f5a2a3a382e27dd4
+```
+Returns the PID(s) of the created UnassignProcedure(s) or -1 if none. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains encoded region names, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar unassigns -i fileName1 fileName2
+```
+
+`bypass [OPTIONS] <PID>...`
+
+**Options:**
+
+`-o,--override` - override if procedure is running/stuck
+
+`-r,--recursive` - bypass parent and its children. SLOW! EXPENSIVE!
+
+`-w,--lockWait` - milliseconds to wait before giving up; default=1
+
+`-i,--inputFiles` - takes one or more input files of PIDs
+
+Pass one (or more) procedure PIDs to skip to the procedure finish. The parent of a bypassed procedure skips to the finish. Entities are left in an inconsistent state and require manual fixup. A Master restart may be needed to clear locks still held. Bypass fails if the procedure has children. Add 'recursive' if all you have is a parent PID to finish the parent and children. *This is SLOW, and dangerous, so use selectively. Doesn't always work.*
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar bypass <PID>
+```
+If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains PIDs, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar bypass -i fileName1 fileName2
+```
+
+`reportMissingRegionsInMeta <NAMESPACE|NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...`
+
+**Options:**
+
+`-i,--inputFiles` - takes one or more input files of namespace or table names
+
+To be used when regions are missing from `hbase:meta` but directories are still present in HDFS. This command is only a check method, designed for reporting purposes; it doesn't perform any fixes. It provides a view of which regions (if any) would get readded to `hbase:meta`, grouped by respective table/namespace. To effectively readd regions in meta, run addFsRegionsMissingInMeta. This command needs `hbase:meta` to be online. For each namespace/table passed as a parameter, it performs a diff between regions available in `hbase:meta` and existing region dirs on HDFS. Region dirs with no matches are printed grouped under their related table name. Tables with no missing regions show a 'no missing regions' message. If no namespace or table is specified, it verifies all existing regions. It accepts a combination of multiple namespaces and tables. Table names should include the namespace portion, even for tables in the default namespace; otherwise the value is assumed to be a namespace. An example triggering a missing regions report for tables 'table_1' and 'table_2', under the default namespace:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar reportMissingRegionsInMeta default:table_1 default:table_2
+```
+An example triggering missing regions report for table 'table_1' under default namespace, and for all tables from namespace 'ns1':
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar reportMissingRegionsInMeta default:table_1 ns1
+```
+Returns a list of missing regions for each table passed as a parameter, or for each table in the namespaces specified as parameters. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar reportMissingRegionsInMeta -i fileName1 fileName2
+```
+
+`addFsRegionsMissingInMeta <NAMESPACE|NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...`
+
+**Options**
+
+`-i,--inputFiles` - takes one or more input files of namespace or table names. To be used when regions are missing from `hbase:meta` but directories are still present in HDFS. **Needs `hbase:meta` to be online**. For each table name passed as a parameter, performs a diff between regions available in `hbase:meta` and region dirs on HDFS. Then, for dirs with no `hbase:meta` matches, it reads the 'regioninfo' metadata file and re-creates the given region in `hbase:meta`. Regions are re-created in 'CLOSED' state in the `hbase:meta` table, but not in the Masters' cache, and they aren't assigned either. To get these regions online, run the HBCK2 'assigns' command printed when this command-run completes.
+
+> [!NOTE]
+> If using hbase releases older than 2.3.0, a rolling restart of HMasters is needed prior to executing the set of 'assigns' output. An example adding missing regions for tables 'tbl_1' in the default namespace, 'tbl_2' in namespace 'n1' and for all tables from namespace 'n2':
+
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar addFsRegionsMissingInMeta default:tbl_1 n1:tbl_2 n2
+```
+Returns HBCK2 an 'assigns' command with all reinserted regions. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar addFsRegionsMissingInMeta -i fileName1 fileName2
+```
+
+`extraRegionsInMeta <NAMESPACE|NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...`
+
+**Options**
+
+`-f,--fix` - fix meta by removing all extra regions found.
+
+`-i,--inputFiles` - takes one or more input files of namespace or table names
+
+Reports regions present in `hbase:meta` but with no related directories on the file system. Needs `hbase:meta` to be online. For each table name passed as a parameter, performs a diff between regions available in `hbase:meta` and region dirs on the given file system. Extra regions are deleted from meta if you pass the --fix option.
+
+> [!NOTE]
+> Before deciding on the "--fix" option, it's worth checking whether the reported extra regions overlap with existing valid regions. If so, then `extraRegionsInMeta --fix` is indeed the optimal solution. Otherwise, the "assigns" command is the simpler solution, as it re-creates region dirs in the filesystem if they don't exist.
+
+An example triggering extra regions report for table 'table_1' under default namespace, and for all tables from namespace 'ns1':
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar extraRegionsInMeta default:table_1 ns1
+```
+An example triggering extra regions report for table 'table_1' under default namespace, and for all tables from namespace 'ns1' with the fix option:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar extraRegionsInMeta -f default:table_1 ns1
+```
+Returns a list of extra regions for each table passed as a parameter, or for each table in the namespaces specified as parameters. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar extraRegionsInMeta -i fileName1 fileName2
+```
+
+`fixMeta`
+
+> [!NOTE]
+> This doesn't work well with HBase 2.1.6. Not recommended to be used on a 2.1.6 HBase Cluster.
+
+Do a server-side fix of bad or inconsistent state in `hbase:meta`. The Master UI has a matching, new 'HBCK Report' tab that dumps reports generated by the most recent run of the catalogjanitor and a new 'HBCK Chore'. **It's critical that `hbase:meta` first be made healthy before making any other repairs**. Fixes 'holes', 'overlaps', etc., creating (empty) region directories in HDFS to match regions added to `hbase:meta`. **This command isn't the same as the similarly named old _hbck1_ command**. It works against the reports generated by the last catalog_janitor and hbck chore runs. If there's nothing to fix, the run is a noop. Otherwise, if the 'HBCK Report' UI reports problems, a run of fixMeta clears up the `hbase:meta` issues.
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar fixMeta
+```
+
+`generateMissingTableDescriptorFile <NAMESPACE:TABLENAME>`
+
+Tries to fix an orphan table by generating a missing table descriptor file. This command has no effect if the table folder is missing or if the `.tableinfo` file is present (we don't override existing table descriptors). This command first checks whether the TableDescriptor is cached in the HBase Master, in which case it recovers the `.tableinfo` accordingly. If the TableDescriptor isn't cached in the master, it creates a default `.tableinfo` file with the following items:
+- the table name
+- the column family list determined based on the file system
+- the default properties for both TableDescriptor and `ColumnFamilyDescriptors`
+If the `.tableinfo` file was generated using default parameters, make sure you check the table / column family properties later (and change them if needed). This method doesn't change anything in HBase; it only writes the new `.tableinfo` file to the file system. Orphan tables can, for example, cause ServerCrashProcedures to get stuck, so you might still need to fix the remaining error after you've generated the missing table info files.
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar generateMissingTableDescriptorFile namespace:table_name
+```
+
+`replication [OPTIONS] [<NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...]`
+
+**Options**
+
+`-f, --fix` - fix any replication issues found.
+
+`-i,--inputFiles` - take one or more input files of table names
+
+Looks for undeleted replication queues and deletes them if passed the '--fix' option. Pass a table name to check for a replication barrier, which is purged if '--fix' is passed.
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar replication namespace:table_name
+```
+If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<TABLENAME>`, one per line. For example:
+
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar replication -i fileName1 fileName2
+```
+
+`setRegionState [<ENCODED_REGIONNAME> <STATE> | -i <INPUT_FILE>...]`
+
+**Options**
+
+`-i,--inputFiles` take one or more input files of encoded region names and states
+
+**Possible region states:**
+
+* OFFLINE
+* OPENING
+* OPEN
+* CLOSING
+* CLOSED
+* SPLITTING
+* SPLIT
+* FAILED_OPEN
+* FAILED_CLOSE
+* MERGING
+* MERGED
+* SPLITTING_NEW
+* MERGING_NEW
+* ABNORMALLY_CLOSED
+
+> [!WARNING]
+> This is a very risky option intended for use as last resort.
+
+Example scenarios include unassigns/assigns that can't move forward because region is in an inconsistent state in 'hbase:meta'. For example, the 'unassigns' command can only proceed if passed a region in one of the following states: **SPLITTING|SPLIT|MERGING|OPEN|CLOSING**.
+
+ Before manually setting a region state with this command, certify that this region isn't handled by a running procedure, such as 'assign' or 'split'. You can get a view of running procedures in the hbase shell using the 'list_procedures' command. An example
+setting region 'de00010733901a05f5a2a3a382e27dd4' to CLOSING:
+
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setRegionState de00010733901a05f5a2a3a382e27dd4 CLOSING
+```
+Returns "0" if region state changed and "1" otherwise.
+If `-i or --inputFiles` is specified, pass one or more input file names.
+Each file contains `<ENCODED_REGIONNAME> <STATE>` one pair per line.
+For example,
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setRegionState -i fileName1 fileName2
+```
+
+`setTableState [<TABLENAME> <STATE> | -i <INPUT_FILE>...]`
+
+**Options**
+
+`-i,--inputFiles` take one or more input files of table names and states
+
+Possible table states: **ENABLED, DISABLED, DISABLING, ENABLING**.
+
+To read current table state, in the hbase shell run:
+
+```
+hbase> get 'hbase:meta', '<TABLENAME>', 'table:state'
+```
+A value of \x08\x00 == ENABLED, \x08\x01 == DISABLED, and so on.
+You can also run `describe '<TABLENAME>'` at the shell prompt. An example making the table 'users' ENABLED:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState users ENABLED
+```
+Returns whatever the previous table state was. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<TABLENAME> <STATE>`, one pair per line.
+For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState -i fileName1 fileName2
+```
+
+`scheduleRecoveries <SERVERNAME>... | -i <INPUT_FILE>...`
+
+**Options**
+
+`-i,--inputFiles` take one or more input files of server names
+
+Schedule `ServerCrashProcedure(SCP)` for list of `RegionServers`. Format server name as `<HOSTNAME>,<PORT>,<STARTCODE>` (See HBase UI/logs).
+
+Example using RegionServer 'a.example.org,29100,1540348649479'
+
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar scheduleRecoveries a.example.org,29100,1540348649479
+```
+Returns the PID(s) of the created ServerCrashProcedure(s) or -1 if no procedure created (see master logs for why not).
+Command support added in hbase versions 2.0.3, 2.1.2, 2.2.0 or newer. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<SERVERNAME>`, one per line. For example:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar scheduleRecoveries -i fileName1 fileName2
+
+```
+## Fixing Problems
+
+### Some General Principles
+When making repairs, **make sure `hbase:meta` is consistent first before you go about fixing any other issue type** such as a filesystem deviance. Deviance in the filesystem or problems with assigns should be addressed after `hbase:meta` has been put in order. If `hbase:meta` has issues, the Master can't make proper placements when adopting orphan filesystem data or making region assignments.
+
+Other general principles to keep in mind: a Region can't be assigned if it's in CLOSING state (or the inverse, unassigned if in OPENING state) without first transitioning via CLOSED. Regions must always move from CLOSED, to OPENING, to OPEN, and then to CLOSING, CLOSED.
+
+When making repairs, fix one table at a time.
+
+If a table is DISABLED, you can't assign a Region. In the Master logs, you see that the Master reports the assign was skipped because the table is DISABLED. You may want to assign a Region because it's currently in the OPENING state and you want it in the CLOSED state so it agrees with the table's DISABLED state. In this situation, you may have to temporarily set the table status to ENABLED so you can do the assign (and then the unassign), and afterwards set the table back to DISABLED. HBCK2 has the facility to allow you to make this change. See the HBCK2 usage output and the sketch below.
+
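+The following sketch shows that workflow end to end, using the setTableState, assigns, and unassigns commands described earlier. The table name 'users' and the encoded region name are hypothetical placeholders:
+
+```
+# Temporarily mark the table ENABLED so the Master accepts the assign
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState users ENABLED
+# Move the stuck region forward: assign it, then unassign it so it ends up CLOSED
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns de00010733901a05f5a2a3a382e27dd4
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar unassigns de00010733901a05f5a2a3a382e27dd4
+# Set the table back to DISABLED so it matches the intended state
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState users DISABLED
+```
+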
+### Assigning/Unassigning
+
+Generally, on assign, the Master persists until successful. An assign takes an exclusive lock on the Region. This precludes a concurrent assign or unassign from running. An assign against a locked Region waits until the lock is released before making progress. See the 'Procedures & Locks' page in the Master UI for the current list of outstanding locks.
+
+**Master startup cannot progress, in holding-pattern until region online**
+
+```
+2018-10-01 22:07:42,792 WARN org.apache.hadoop.hbase.master.HMaster: hbase:meta,1.1588230740 isn't online; state={1588230740 state=CLOSING, ts=1538456302300, server=ve1017.example.org,22101,1538449648131}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region online.
+```
+The Master is unable to continue startup because there's no Procedure to assign `hbase:meta` (or `hbase:namespace`). To inject one, use the HBCK2 tool:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns -skip 1588230740
+```
+where **1588230740 is the encoded name of the `hbase:meta` Region**. Pass the '-skip' option to stop HBCK2 from doing a version check against the remote master. If the remote master isn't up, the version check prompts a 'Master is initializing' response or a 'PleaseHoldException' and drops the assign attempt. The '-skip' option avoids the version check and lands the scheduled assign.
+
+The same may happen to the `hbase:namespace` system table. Look for the encoded Region name of the `hbase:namespace` Region and do similar to what we did for `hbase:meta`. In this latter case, the Master actually prints a helpful message that looks like
+
+```
+2019-07-09 22:08:38,966 WARN [master/localhost:16000:becomeActiveMaster] master.HMaster: hbase:namespace,,1562733904278.9559cf72b8e81e1291c626a8e781a6ae. isn't online; state={9559cf72b8e81e1291c626a8e781a6ae state=CLOSED, ts=1562735318897, server=null}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
+```
+To schedule an assign for the hbase:namespace table noted in the above log line, you would do:
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 9559cf72b8e81e1291c626a8e781a6ae
+```
+passing the encoded name for the namespace region (the encoded name differs per deploy).
+
+### Missing Regions in `hbase:meta` region/table restore/rebuild
+There have been some unusual cases where table regions have been removed from the `hbase:meta` table. Some triage on such cases revealed they were operator-induced: users had run the obsolete hbck1 OfflineMetaRepair tool against an hbase-2.x cluster. OfflineMetaRepair is a well-known tool for fixing `hbase:meta` related issues on HBase 1.x versions. The original version isn't compatible with HBase 2.x or higher versions, and it has undergone some adjustments so that, in the extreme, it can now be run via HBCK2.
+
+In most of these cases, regions end up missing in `hbase:meta` at random, but hbase may still be operational. In such situations, the problem can be addressed with the Master online, using the addFsRegionsMissingInMeta command in HBCK2. This command is less disruptive to hbase than a full `hbase:meta` rebuild covered later, and it can be used even for recovering the namespace table region.
+
+### Extra Regions in `hbase:meta` region/table restore/rebuild
+There can also be situations where table regions have been removed from the file system but still have related entries in the `hbase:meta` table. This may happen due to problems with splitting, manual operation mistakes (like deleting/moving the region dir manually), or even meta info data loss issues such as HBASE-21843.
+
+Such a problem can be addressed with the Master online, using the **extraRegionsInMeta --fix** command in HBCK2. This command is less disruptive to hbase than a full `hbase:meta` rebuild covered later. It's also useful when this happens on versions that don't support the fixMeta hbck2 option (any prior to "2.0.6", "2.1.6", "2.2.1", "2.3.0", "3.0.0").
+
+### Online `hbase:meta` rebuild recipe
+If `hbase:meta` corruption isn't too critical, hbase can still bring it online. Even if the namespace region is among the missing regions, it's possible to scan `hbase:meta` during the initialization period while the Master is waiting for the namespace to be assigned. To verify this situation, a `hbase:meta` scan command can be executed. If it doesn't time out or show any errors, `hbase:meta` is online:
+```
+echo "scan 'hbase:meta', {COLUMN=>'info:regioninfo'}" | hbase shell
+```
+HBCK2 **addFsRegionsMissingInMeta** can be used if the scan doesn't show any errors. It reads region metadata info available in the FS region directories in order to re-create regions in `hbase:meta`. Since it can run with hbase partially operational, it attempts to disable online tables that are affected by the reported problem and then readds the regions to `hbase:meta`. It can check for specific tables/namespaces, or all tables from all namespaces. An example adding missing regions for table 'tbl_1' in the default namespace, 'tbl_2' in namespace 'n1', and for all tables from namespace 'n2':
+```
+hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar addFsRegionsMissingInMeta default:tbl_1 n1:tbl_2 n2
+```
+Because it operates independently from the Master, once it finishes successfully more steps are required to actually have the readded regions assigned. These steps are:
+
+**addFsRegionsMissingInMeta** outputs an assigns command with all regions that got readded. This command needs to be executed later, so copy and save it for convenience.
+
+**For HBase versions prior to 2.3.0, after addFsRegionsMissingInMeta has finished successfully and its output has been saved, restart all running HBase Masters.**
+
+Once the Masters are restarted and `hbase:meta` is online (check whether the Web UI is accessible), run the assigns command from the addFsRegionsMissingInMeta output saved earlier.
+
+> [!NOTE]
+> If the namespace region is among the missing regions, you need to add the --skip flag at the beginning of the assigns command returned.
+
+Should a cluster suffer a catastrophic loss of the `hbase:meta` table, a rough rebuild is possible using the following recipe. In outline: stop the cluster; run the HBCK2 OfflineMetaRepair tool, which reads directories and metadata dropped into the filesystem and makes a best effort at reconstructing a viable `hbase:meta` table; restart your cluster; inject an assign to bring the system namespace table online; and then, finally, reassign the user space tables you'd like enabled (the rebuilt `hbase:meta` creates a table with all tables offline and no regions assigned).
+
+### Detailed rebuild recipe
+
+> [!NOTE]
+> Use it only as a last resort. Not recommended.
+
+* Stop the cluster.
+
+* Run the rebuild `hbase:meta` command from HBCK2. This moves aside the original `hbase:meta` and puts in place a newly rebuilt one. Here's an example of how to run the tool; it adds the -details flag so the tool dumps info on the regions it found in HDFS:
+ ```
+ hbase --config /etc/hbase/conf -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar org.apache.hbase.hbck1.OfflineMetaRepair -details
+ ```
+* Start up the cluster. It won't come up fully. It's stuck because the namespace table isn't online and there's no assign procedure in the procedure store for this contingency. The hbase master log shows this state. Here's an example of what it logs:
+ ```
+ 2019-07-10 18:30:51,090 WARN [master/localhost:16000:becomeActiveMaster] master.HMaster: hbase:namespace,,1562808216225.725a0fe6c2c869d3d0a9ed82bfa80fa3. isn't online; state={725a0fe6c2c869d3d0a9ed82bfa80fa3 state=CLOSED, ts=1562808619952, server=null}; ServerCrashProcedures=false. Master startup can't progress, in holding-pattern until region onlined.
+ ```
+ To assign the namespace table region, you can't use the shell. If you use the shell, it fails with a PleaseHoldException because the master isn't yet up (it's waiting for the namespace table to come online before it declares itself 'up'). You have to use the HBCK2 assigns command. To assign, you need the namespace encoded name. It shows in the log quoted above; that is, 725a0fe6c2c869d3d0a9ed82bfa80fa3 in this case. You have to pass the -skip option to 'skip' the master version check (without it, your HBCK2 invocation elicits the PleaseHoldException because the master isn't yet up). Here's an example adding an assign of the namespace table:
+ ```
+ hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 725a0fe6c2c869d3d0a9ed82bfa80fa3
+ ```
+ If the invocation comes back with 'Connection refused', is the Master up? The Master will shut down after a while if it can't initialize itself. Just restart the cluster/master and rerun the assigns command.
+
+* When the assigns run successfully, you see it emit the likes of the following. The '48' on the end is the PID of the scheduled assign procedure. If the PID returned is '-1', then the master startup hasn't progressed sufficiently, so retry. Or, the encoded regionname is incorrect; check it.
+ ```
+ hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 725a0fe6c2c869d3d0a9ed82bfa80fa3
+ ```
+ ```
+ 18:40:43.817 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+ 18:40:44.315 [main] INFO org.apache.hbase.HBCK2 - hbck sufpport check skipped
+ [48]
+ ```
+* Check the master logs. The master should have come up. You see successful completion of PID=48. Look for a line like this to verify successful master launch:
+ ```
+ master.HMaster: Master has completed initialization 132.515sec
+ ```
+ It might take a while to appear.
+
+ The rebuild of `hbase:meta` adds the user tables in DISABLED state and the regions in CLOSED mode. Re-enable tables via the shell to bring all table regions back online. Do it one at a time, or use the `enable_all '.*'` command to enable all tables in one shot (see the sketch below).
+
+ The rebuilt meta is missing edits and may need subsequent repair and cleaning using the facilities outlined higher up in this guide.
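+ A sketch of re-enabling tables from the hbase shell (the table name 'users' is hypothetical; `enable_all` typically asks for confirmation before acting):
+ ```
+ hbase> enable 'users'
+ hbase> enable_all '.*'
+ ```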
+
+### Dropped reference files, missing hbase.version file, and corrupted files
+
+HBCK2 can check for hanging references and corrupt files. You can ask it to sideline bad files, which may be needed to get over humps where regions won't come online or reads are failing. See the filesystem command in the HBCK2 listing. Pass one or more table names (or 'none' to check all tables). It reports bad files. Pass the `--fix` option to effect repairs.
+
+### Procedure Start-over
+
+At an extreme, as a last resort, if the Master is distraught and all attempts at fixup only turn up undoable locks or Procedures that can't finish, and/or the set of MasterProcWALs is growing without bound, it's possible to wipe the Master state clean. Just move aside the `/hbase/MasterProcWALs/` directory under your HBase install and restart the Master process. It comes back as a tabula rasa, without memory of the prior state.
+
+If at the time of the erasure all Regions were happily assigned or offlined, then on Master restart the Master should pick up and continue as though nothing happened. But if there were Regions-In-Transition at the time, then the operator has to intervene to bring outstanding assigns/unassigns to their terminal point. Read the `hbase:meta` `info:state` columns as described to figure out what needs assigning/unassigning. Having erased all history by moving aside the MasterProcWALs, none of the entities should be locked, so you're free to bulk assign/unassign.
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
Previously updated : 10/20/2022 Last updated : 05/05/2023 # Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
The new and old HDInsight clusters must have access to the same Storage Accounts
Migration of Hive tables to a new Storage Account needs to be done as a separate step. See [Hive Migration across Storage Accounts](./hive-migration-across-storage-accounts.md). +
+## Changes in Hive 3 and what's new:
+
+### Hive client changes
+Hive 3 supports only the thin client, Beeline, for running queries and Hive administrative commands from the command line. Beeline uses a JDBC connection to HiveServer to execute all commands. Parsing, compiling, and executing operations occur in HiveServer.
+
+You enter supported Hive CLI commands by invoking Beeline using the `hive` keyword as a Hive user, or you invoke Beeline directly with `beeline -u <JDBC URL>`. You can get the JDBC URL from the Ambari Hive page.
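+For example, a minimal connection from a cluster head node, where `<JDBC URL>` is the URL copied from the Ambari Hive page and `<username>` is a placeholder:
+
+```
+# Connect with Beeline over JDBC and run an interactive session
+beeline -u "<JDBC URL>" -n <username>
+```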
++
+Using Beeline (instead of the thick-client Hive CLI, which is no longer supported) has several advantages, including:
+
+* Instead of maintaining the entire Hive code base, you can maintain only the JDBC client.
+* Startup overhead is lower by using Beeline because the entire Hive code base isn't involved.
+
+You can also execute the Hive script, which is under the directory "/usr/bin". It invokes a Beeline connection using the JDBC URL.
++
+A thin client architecture facilitates securing data in these ways:
+
+* Session state, internal data structures, passwords, and so on, reside on the client instead of the server.
+* The small number of daemons required to execute queries simplifies monitoring and debugging.
+
+HiveServer enforces allowlist and blocklist settings that you can change using `SET` commands. Using the blocklists, you can restrict memory configuration to prevent HiveServer instability. You can configure multiple HiveServer instances with different allowlists and blocklists to establish different levels of stability.
+
+### Hive Metastore changes
+
+Hive now supports only a remote metastore instead of an embedded metastore (within the HS2 JVM). The Hive metastore resides on a node in a cluster managed by Ambari as part of the HDInsight stack. A standalone server outside the cluster isn't supported. You no longer set key=value commands on the command line to configure the Hive metastore. The HMS service to use, and the connection to it, are determined by the value configured in `hive.metastore.uris`.
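+As a quick check of which remote metastore the cluster points at, you can look up `hive.metastore.uris` on a head node; the config path below is the typical location and may differ on your cluster:
+
+```
+# Print the configured remote Hive metastore URIs (path may vary)
+grep -A1 "hive.metastore.uris" /etc/hive/conf/hive-site.xml
+```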
+
+#### Execution engine change
+
+Apache Tez replaces MapReduce as the default Hive execution engine. MapReduce is deprecated starting with Hive 2.0 (see [HIVE-12300](https://issues.apache.org/jira/browse/HIVE-12300)). With expressions of directed acyclic graphs (DAGs) and data transfer primitives, execution of Hive queries under Tez improves performance. SQL queries you submit to Hive are executed as follows:
+
+1. Hive compiles the query.
+1. Tez executes the query.
+1. YARN allocates resources for applications across the cluster and enables authorization for Hive jobs in YARN queues.
+1. Hive updates the data in ABFS or WASB.
+1. Hive returns query results over a JDBC connection.
+
+If a legacy script or application specifies MapReduce for execution, an exception occurs.
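+As a quick check, you can print the engine a session uses from Beeline; `<JDBC URL>` is a placeholder for the URL from the Ambari Hive page, and on HDInsight 4.x the value should be tez:
+
+```
+beeline -u "<JDBC URL>" -e "SET hive.execution.engine;"
+```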
++
+> [!NOTE]
+> Most user-defined functions (UDFs) require no change to execute on Tez instead of MapReduce.
+
+**Changes with respect to ACID transaction and CBO:**
+
+* ACID tables are the default table type in HDInsight 4.x, with no performance or operational overhead.
+* Simplified application development, operations with stronger transactional guarantees, and simpler semantics for SQL commands
+* Hive internally takes care of bucketing for ACID tables in HDInsight 4.1, removing the maintenance overhead.
+* Advanced optimizations - upgrades in CBO
+* Automatic query cache. The property used to enable query caching is `hive.query.results.cache.enabled`. You need to set this property to true. Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring `hive.query.results.cache.max.size` (in bytes). See the example after this list.
+
+ For more information, see [Benefits of migrating to Azure HDInsight 4.0](../benefits-of-migrating-to-hdinsight-40.md).
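+A quick way to inspect the query result cache settings for a session from Beeline (`<JDBC URL>` is a placeholder):
+
+```
+beeline -u "<JDBC URL>" -e "SET hive.query.results.cache.enabled; SET hive.query.results.cache.max.size;"
+```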
+
+**Materialized view rewrites**
+
+ For more information, see [Hive - Materialized Views](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/hive-materialized-views/ba-p/2502785).
+
+## Changes after upgrading to Apache Hive 3
+To locate and use your Apache Hive 3 tables after an upgrade, you need to understand the changes that occur during the upgrade process: changes to the management and location of tables, to permissions on table directories, to table types, and ACID-compliance concerns.
+
+### Hive Management of Tables
+Hive 3 takes more control of tables than Hive 2 and requires that managed tables adhere to a strict definition. The level of control Hive takes over tables is comparable to that of traditional databases. Hive is aware of delta changes to the data; this control framework enhances performance.
+
+For example, if Hive knows that resolving a query doesn't require scanning tables for new data, Hive returns results from the hive query result cache.
+When the underlying data in a materialized view changes, Hive needs to rebuild the materialized view. ACID properties reveal exactly which rows changed and need to be processed and added to the materialized view.
+
+### Hive changes to ACID properties
+
+Hive 2.x and 3.x have both transactional (managed) and nontransactional (external) tables. Transactional tables have atomic, consistent, isolated, and durable (ACID) properties. In Hive 2.x, the initial version of ACID transaction processing is ACID v1. In Hive 3.x, the default tables use ACID v2.
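+
+A minimal sketch, with an illustrative table name, of checking whether a table became transactional after the upgrade:
+
+```
+-- Look for Table Type: MANAGED_TABLE and transactional=true in the detailed output
+DESCRIBE FORMATTED sales;
+```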
+
+### Native and non-native storage formats
+
+Storage formats are a factor in upgrade changes to table types. Hive 2.x and 3.x support the following Hadoop native and non-native storage formats:
+
+**Native:** Tables with built-in support in Hive, in the following file formats:
+* Text
+* Sequence File
+* RC File
+* AVRO File
+* ORC File
+* Parquet File
+
+**Non-native:** Tables that use a storage handler, such as the DruidStorageHandler or HBaseStorageHandler
+
+## HDInsight 4.x upgrade changes to table types
+
+The following table compares Hive table types and ACID operations before an upgrade from HDInsight 3.x and after an upgrade to HDInsight 4.x. The ownership of the Hive table file is a factor in determining table types and ACID operations after the upgrade.
+
+### HDInsight 3.x and HDInsight 4.x Table type comparison
+
+|**HDInsight 3.x**| | | |**HDInsight 4.x**| |
+|-|-|-|-|-|-|
+|**Table Type** |**ACID v1** |**Format** |**Owner (user) of Hive Table File** |**Table Type**|**ACID v2**|
+|External |No |Native or non-native| Hive or non-Hive |External |No|
+|Managed |Yes |ORC |Hive or non-Hive| Managed, updatable |Yes|
+|Managed |No |ORC |Hive| Managed, updatable |Yes|
+|Managed|No|ORC|non-Hive |External, with data delete |No|
+|Managed |No |Native (but non-ORC)| Hive |Managed, insert only |Yes|
+|Managed|No|Native (but non-ORC)|non-Hive |External, with data delete |No|
+|Managed |No |Non-native| Hive or non-Hive| External, with data delete| No|
+
+## Hive Impersonation
+
+Hive impersonation was enabled by default in Hive 2 (doAs=true) and is disabled by default in Hive 3. Hive impersonation determines whether Hive runs queries as the end user or as the Hive user.
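+
+A minimal sketch, assuming the property is visible from your Beeline session, of checking the impersonation setting:
+
+```
+-- doAs is false by default in Hive 3, so queries run as the hive user
+SET hive.server2.enable.doAs;
+```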
+
+### Other HDInsight 4.x upgrade changes
+
+1. Managed, ACID tables not owned by the Hive user remain managed tables after the upgrade, but Hive becomes the owner.
+1. After the upgrade, the format of a Hive table is the same as before the upgrade. For example, native or non-native tables remain native or non-native, respectively.
+
+## Location Changes
+
+After the upgrade, the location of managed tables or partitions doesn't change under any one of the following conditions:
+
+* The old table or partition directory wasn't in its default location /apps/hive/warehouse before the upgrade.
+* The old table or partition is in a different file system than the new warehouse directory.
+* The old table or partition directory is in a different encryption zone than the new warehouse directory.
+
+Otherwise, the location of managed tables or partitions does change. The upgrade process moves managed files to `/hive/warehouse/managed`. By default, Hive places any new external tables you create in HDInsight 4.x in `/hive/warehouse/external`.
+
+The `/apps/hive` directory, which is the former location of the Hive 2.x warehouse, might or might not exist in HDInsight 4.x.
+
+The following scenarios describe the location changes:
+
+**Scenario 1**
+
+If a table is a managed table in HDInsight 3.x, is present in the location `/apps/hive/warehouse`, and is converted to an external table in HDInsight 4.x, then the location remains `/apps/hive/warehouse` in HDInsight 4.x; the upgrade doesn't move it. If you later run an ALTER TABLE command to convert it to a managed (ACID) table, it's still present in the same location `/apps/hive/warehouse`.
+
+**Scenario 2**
+
+If a table is a managed table in HDInsight 3.x, is present in the location `/apps/hive/warehouse`, and is converted to a managed (ACID) table in HDInsight 4.x, then the location is `/hive/warehouse/managed`.
+
+**Scenario 3**
+If you create an external table in HDInsight 4.x without specifying a location, it's created in the location `/hive/warehouse/external`.
+
+## Table conversion
+
+After upgrading, to convert a nontransactional table to an ACID v2 transactional table, you use the `ALTER TABLE` command and set table properties to
+```
+'transactional'='true' and 'EXTERNAL'='false'
+```
+* A managed, non-ACID, ORC-format table owned by a non-Hive user in HDInsight 3.x is converted to an external, non-ACID table in HDInsight 4.x.
+* If the user wants to change an external (non-ACID) table to ACID, they must change the external table to managed and ACID at the same time, because in HDInsight 4.x all managed tables are strictly ACID by default. You can't convert an external (non-ACID) table directly to an ACID table.
+
+> [!NOTE]
+> The table must be an ORC table.
+
+To convert an external (non-ACID) table to a managed (ACID) table:
+
+1. Convert the external table to a managed table and set `transactional` to true by using the following command:
+ ```
+ alter table <table name> set TBLPROPERTIES ('EXTERNAL'='false', 'transactional'='true');
+ ```
+1. If you try to set only one of these properties on an external table, you get an error, as shown in the following scenarios.
+
+**Scenario 1**
+
+Consider the table `rt`, which is an external (non-ACID) table. If the table is a non-ORC table:
+
+```
+alter table rt set TBLPROPERTIES ('transactional'='true');
+ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The table must be stored using an ACID compliant format (such as ORC): work.rt
+The table must be ORC format
+```
+
+**Scenario 2**
+
+If the table is an ORC table:
+
+```
+>>>> alter table rt set TBLPROPERTIES ('transactional'='true');
+ERROR:
+Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. work.rt can't be declared transactional because it's an external table (state=08S01,code=1)
+```
+
+This error occurs because the table `rt` is an external table, and you can't convert an external table to ACID.
+
+**Scenario 3**
+
+```
+>>>> alter table rt set TBLPROPERTIES ('EXTERNAL'='false');
+ERROR:
+Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Table work.rt failed strict managed table checks due to the following reason: Table is marked as a managed table but isn't transactional. (state=08S01,code=1)
+```
+
+Here, we're trying to change the external table to a managed table first. In HDInsight 4.x, it must be a strictly managed table (which means it must be ACID).
+So, here you get a deadlock. The only way to convert the external (non-ACID) table to a managed (ACID) table is to run the following command:
+
+```
+alter table rt set TBLPROPERTIES ('EXTERNAL'='false', 'transactional'='true');
+```
+
+## Syntax and semantics
+
+* Creating a table
+To improve usability and functionality, Hive 3 changed table creation.
+Hive has changed table creation in the following ways:
+ * Creates ACID-compliant table, which is the default in HDP
+ * Supports simple writes and inserts
+ * Writes to multiple partitions
+ * Inserts multiple data updates in a single SELECT statement
+ * Eliminates the need for bucketing.
+
+ If you have an ETL pipeline that creates tables in Hive, the tables are created as ACID tables. Hive now tightly controls access and performs compaction periodically on the tables.
+
+ **Before Upgrade**
+ In HDInsight 3.x, by default CREATE TABLE created a non-ACID table.
+
+ **After Upgrade** By default CREATE TABLE creates a full, ACID transactional table in ORC format.
+
+ **Action Required**
+ To access Hive ACID tables from Spark, you connect to Hive using the Hive Warehouse Connector (HWC). To write ACID tables to Hive from Spark, you use the HWC and the HWC API.
+
+* Escaping `db.table` References
+
+ You need to change queries that use db.table references to prevent Hive from interpreting the entire db.table string as the table name.
+ Hive 3.x rejects `db.table` in SQL queries. A dot (.) isn't allowed in table names. You enclose the database name and the table name in backticks.
+ Find a table that has the problematic table reference; for example,
+ `math.students` appears in a CREATE TABLE statement.
+ Enclose the database name and the table name in backticks.
+ `CREATE TABLE `math`.`students` (name VARCHAR(64), age INT, gpa DECIMAL(3,2));`
+
+* CASTING TIMESTAMPS
+ Results of applications that cast numerics to timestamps differ from Hive 2 to Hive 3. Apache Hive changed the behavior of CAST to comply with the SQL Standard, which doesn't associate a time zone with the TIMESTAMP type.
+
+ **Before Upgrade**
+ Casting a numeric type value into a timestamp could be used to produce a result that reflected the time zone of the cluster. For example, 1597217764557 is 2020-08-12 00:36:04 PDT. Running the following query casts the numeric to a timestamp in PDT:
+ `SELECT CAST(1597217764557 AS TIMESTAMP);`
+ | 2020-08-12 00:36:04 |
+
+ **After Upgrade**
+ Casting a numeric type value into a timestamp produces a result that reflects the UTC instead of the time zone of the cluster. Running the query casts the numeric to a timestamp in UTC.
+ `SELECT CAST(1597217764557 AS TIMESTAMP);`
+ | 2020-08-12 07:36:04.557 |
+
+ **Action Required**
+ Change applications. Don't cast from a numeral to obtain a local time zone. You can use the built-in functions `from_utc_timestamp` and `to_utc_timestamp` to mimic the behavior before the upgrade.
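+
+ A minimal sketch, with an illustrative time zone, of reproducing the pre-upgrade (cluster-local) result explicitly:
+
+ ```
+ -- Interpret the UTC timestamp in a named time zone instead of relying on the cluster time zone
+ SELECT from_utc_timestamp(CAST(1597217764557 AS TIMESTAMP), 'America/Los_Angeles');
+ -- 2020-08-12 00:36:04.557
+ ```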
+
+* CHECKING COMPATIBILITY OF COLUMN CHANGES
+ A default configuration change can cause applications that change column types to fail.
+
+ **Before Upgrade**
+ In HDInsight 3.x, `hive.metastore.disallow.incompatible.col.type.changes` is false by default to allow changes to incompatible column types. For example, you can change a STRING column to a column of an incompatible type, such as MAP<STRING, STRING>. No error occurs.
+
+ **After Upgrade**
+ In HDInsight 4.x, `hive.metastore.disallow.incompatible.col.type.changes` is true by default. Hive prevents changes to incompatible column types. Compatible column type changes, such as INT, STRING, and BIGINT, aren't blocked.
+
+ **Action Required**
+ Change applications to disallow incompatible column type changes to prevent possible data corruption.
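+
+ A minimal sketch, with illustrative names, of a compatible column change that still succeeds under the new default:
+
+ ```
+ -- Widening INT to BIGINT is a compatible change and isn't blocked
+ ALTER TABLE sample_table CHANGE COLUMN id id BIGINT;
+ ```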
+
+* DROPPING PARTITIONS
+
+ The OFFLINE and NO_DROP keywords in the CASCADE clause for dropping partitions cause performance problems and are no longer supported.
+
+ **Before Upgrade**
+ You could use OFFLINE and NO_DROP keywords in the CASCADE clause to prevent partitions from being read or dropped.
+
+ **After Upgrade**
+ OFFLINE and NO_DROP aren't supported in the CASCADE clause.
+
+ **Action Required**
+ Change applications to remove OFFLINE and NO_DROP from the CASCADE clause. Use an authorization scheme, such as Ranger, to prevent partitions from being dropped or read.
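+
+ A minimal sketch, with illustrative names, of dropping a partition without the removed keywords:
+
+ ```
+ -- Drop the partition directly; use Ranger policies instead of OFFLINE/NO_DROP to protect partitions
+ ALTER TABLE sales DROP IF EXISTS PARTITION (ingest_date='2020-01-01');
+ ```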
+
+* RENAMING A TABLE
+ After the upgrade, renaming a managed table moves its location only if the table was created without a `LOCATION` clause and is under its database directory.
+
+## Limitations with respect to CBO
+
+* The SELECT output shows trailing zeros in some columns. For example, if a table column has the data type decimal(38,4) and you insert the value 38, the output adds trailing zeros and returns the result as 38.0000.
+As per https://issues.apache.org/jira/browse/HIVE-12063 and https://issues.apache.org/jira/browse/HIVE-24389, the idea is to retain the scale and precision instead of running a wrapper in decimal columns. This is the default behavior from Hive 2.
+To fix this issue, you can use one of the following options.
+
+ 1. Modify the data type at the source level to adjust the precision, such as col1(decimal(38,0)). This value provides the result as 38 without trailing zeros. But if you insert data such as 35.0005, the fractional part (.0005) is dropped and only the value 35 is provided.
+ 1. Remove the trailing zeros for the affected columns and then cast to string (see the sketch after this list):
+    1. Use `select TRIM(cast(<column_name> AS STRING))+0 FROM <table_name>;`
+    1. Use regex.
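+
+ A worked sketch of the TRIM-and-cast option, with illustrative names (adding 0 converts the trimmed string back to a number; the result type depends on your Hive version):
+
+ ```
+ -- Cast the decimal through STRING and add 0 to drop the trailing zeros
+ SELECT TRIM(CAST(col1 AS STRING)) + 0 FROM sample_table;
+ ```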
+
+* The Hive query fails with "Unsupported SubQuery Expression" when you use UNIX_TIMESTAMP in the query.
+  For example, if you run the following query, it throws an "Unsupported SubQuery Expression" error:
+ ```
+ select * from
+ (SELECT col_1 from table1 where col_2 >= unix_timestamp('2020-03-07','yyyy-MM-dd'));
+ ```
+ The root cause of this issue is that the current Hive codebase throws an exception while parsing UNIX_TIMESTAMP because there's no precision mapping in the `HiveTypeSystemImpl.java` code for the precision of `UNIX_TIMESTAMP`, which Calcite recognizes as `BIGINT`.
+ But the following query works fine:
+ `select * from (SELECT col_1 from table1 where col_2 >= 1);`
+
+ This command executes successfully because col_2 is an integer.
+ The preceding issue was fixed in hdi-3.1.2-4.1.12 (4.1 stack) and hdi-3.1.2-5.0.8 (5.0 stack).
+ ## Steps to upgrade ### 1. Prepare the data
In certain situations when running a Hive query, you might receive `java.lang.Cl
``` The update command is to update the details manually in the backend DB and the alter command is used to alter the table with the new SerDe class from beeline or Hive.
+### Hive Backend DB schema compare Script
+
+You can run the following script after completing the migration.
+
+There's a chance that a few columns are missing in the backend DB, which causes query failures. If the schema upgrade didn't happen properly, there's a chance of hitting an invalid column name issue. The following script fetches the column names and data types from the customer's backend DB and reports any missing columns or incorrect data types.
+
+The following paths contain the `schemacompare_final.py` and `test.csv` files. The script is in the `schemacompare_final.py` file, and the `test.csv` file contains all the column names and data types for all the tables that should be present in the Hive backend DB.
+
+https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/schemacompare_final.py
+
+https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/test.csv
+
+Download these two files from the links, and copy them to one of the head nodes where the Hive service is running.
+
+**Steps to execute the script:**
+
+Create a directory called "schemacompare" under the "/tmp" directory.
+
+Put "schemacompare_final.py" and "test.csv" into the folder "/tmp/schemacompare". Run "ls -ltrh /tmp/schemacompare/" and verify that the files are present.
+
+To execute the Python script, use the command "python schemacompare_final.py". The script takes less than five minutes to complete. It automatically connects to your backend DB, fetches the details from each table that Hive uses, and updates the details in a new CSV file called "return.csv". After creating the return.csv file, the script compares the data with the "test.csv" file and prints the column name or data type if anything is missing under the table name.
+
+After you execute the script, the following lines are printed, which indicate that the details are fetched for the tables and that the script is progressing:
+
+```
+KEY_CONSTRAINTS
+Details Fetched
+DELEGATION_TOKENS
+Details Fetched
+WRITE_SET
+Details Fetched
+SERDES
+Details Fetched
+```
+
+You can see the difference details under the "DIFFERENCE DETAILS:" line. If there's any difference, the script prints:
+
+```
+PART_COL_STATS;
+('difference', ['BIT_VECTOR', 'varbinary'])
+```
+
+The line ending with a semicolon, such as `PART_COL_STATS;`, is the table name. Under the table name, the differences are listed as `('difference', ['BIT_VECTOR', 'varbinary'])` if there's any difference in a column or data type.
+
+If there are no differences in the table, then the output is
+
+```
+BUCKETING_COLS;
+('difference', [])
+PARTITIONS;
+('difference', [])
+```
+
+From this output, you can find the column names that are missing or incorrect. You can run the following query in your backend DB to verify whether the column is missing:
+
+`SELECT * FROM INFORMATION_SCHEMA.columns WHERE TABLE_NAME = 'PART_COL_STATS';`
+
+If any of the columns is missing in a table, query failures can occur. For example, if you run queries like INSERT or INSERT OVERWRITE, the stats are calculated automatically and Hive tries to update stats tables like PART_COL_STATS and TAB_COL_STATS. If a column like "BIT_VECTOR" is missing in those tables, the query fails with an "Invalid column name" error. You can add the missing column as shown in the following commands. As a workaround, you can disable stats collection by setting the following properties, which prevents the stats from being updated in the backend database.
+
+```
+hive.stats.autogather=false;
+hive.stats.column.autogather=false;
+```
+
+To fix this issue, run the following two queries on the backend SQL server (Hive metastore DB):
+
+```
+ALTER TABLE PART_COL_STATS ADD BIT_VECTOR VARBINARY(MAX);
+ALTER TABLE TAB_COL_STATS ADD BIT_VECTOR VARBINARY(MAX);
+```
+This step avoids queries failing with "Invalid column name" errors after the migration.
+ ## Secure Hive across HDInsight versions HDInsight optionally integrates with Azure Active Directory using HDInsight Enterprise Security Package (ESP). ESP uses Kerberos and Apache Ranger to manage the permissions of specific resources within the cluster. Ranger policies deployed against Hive in HDInsight 3.6 can be migrated to HDInsight 4.0 with the following steps:
HDInsight optionally integrates with Azure Active Directory using HDInsight Ente
Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for other changes.
+## Post the migration
+
+Make sure to follow these steps after completing the migration.
+
+**Table Sanity**
+1. Recreate tables in Hive 3.1 using CTAS or IOW to change table type instead of changing table properties.
+1. Keep doAs as false.
+1. Ensure managed table/data ownership is with the "hive" user.
+1. Use managed ACID tables if table format is ORC and managed non-ACID for non-ORC types.
+1. Regenerate stats on recreated tables as migration would have caused incorrect stats.
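+
+A minimal sketch, with illustrative table names, of recreating a table with CTAS and then regenerating its statistics (managed tables default to ACID ORC in HDInsight 4.x):
+
+```
+-- Recreate the table instead of flipping table properties
+CREATE TABLE sales_new STORED AS ORC AS SELECT * FROM sales_old;
+
+-- Regenerate table-level and column-level statistics
+ANALYZE TABLE sales_new COMPUTE STATISTICS;
+ANALYZE TABLE sales_new COMPUTE STATISTICS FOR COLUMNS;
+```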
+
+**Cluster Health**
+
+If multiple clusters share the same storage and HMS DB, enable auto-compaction/compaction threads only in one cluster and disable them everywhere else.
+
+Tune the Metastore to reduce its CPU usage:
+1. Disable transactional event listeners.
+ > [!NOTE]
+ > Perform the following steps only if the Hive replication feature isn't used.
+
+ 1. From Ambari UI, **remove the value for hive.metastore.transactional.event.listeners**.
+ 1. Default Value: `org.apache.hive.hcatalog.listener.DbNotificationListener`
+ 1. New value: `<Empty>`
+
+1. Disable Hive PrivilegeSynchronizer
+ 1. From Ambari UI, **set hive.privilege.synchronizer = false.**
+ 1. Default Value: `true`
+ 1. New value: `false`
+
+1. Optimize the partition repair feature
+ 1. Disable partition repair - This feature is used to synchronize the partitions of Hive tables in the storage location with the Hive metastore. You may disable this feature if "msck repair" is used after the data ingestion.
+ 1. To disable the feature, **add "discover.partitions=false"** under table properties using ALTER TABLE (see the sketch after this list).
+ OR (if the feature can't be disabled)
+ 1. Increase the partition repair frequency.
+
+1. From Ambari UI, increase the value of "metastore.partition.management.task.frequency" (in seconds).
+ > [!NOTE]
+ > This change can delay the visibility of some of the partitions ingested into storage.
+
+ 1. Default Value: `60`
+ 1. Proposed value: `3600`
+1. Advanced Optimizations
+The following options need to be tested in a lower (non-production) environment before applying them in production.
+ 1. Remove the Materialized view related listener if Materialized view isn't used.
+ 1. From Ambari UI, **add a custom property (in custom hive-site.xml) and remove the unwanted background metastore threads**.
+ 1. Property name: **metastore.task.threads.remote**
+ 1. Default Value: `N/A (it uses few class names internally)`
+ 1. New value:
+`org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService,org.apache.hadoop.hive.metastore.txn.AcidOpenTxnsCounterService,org.apache.hadoop.hive.metastore.txn.AcidCompactionHistoryService,org.apache.hadoop.hive.metastore.txn.AcidWriteSetService,org.apache.hadoop.hive.metastore.PartitionManagementTask`
+1. Disable the background threads if replication is disabled.
+ 1. From Ambari UI, add a custom property (in custom hive-site.xml) and remove the unwanted threads.
+ 1. Property name: **metastore.task.threads.always**
+ 1. Default Value: `N/A (it uses few class names internally)`
+ 1. New value: `org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask`
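+
+A minimal sketch, with an illustrative table name, of disabling partition discovery for a single table (referenced in the partition repair step in the preceding list):
+
+```
+ALTER TABLE sales SET TBLPROPERTIES ('discover.partitions'='false');
+```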
+
+**Query Tuning**
+1. Keep the default Hive configs to run the queries because they're tuned for TPC-DS workloads. You need query-level tuning only if a query fails or runs slowly.
+1. Ensure stats are up to date to avoid bad plan or wrong results.
+1. Avoid mixing external and managed ACID tables in join queries. In such cases, try to convert the external table to a managed non-ACID table by recreating it.
+1. In Hive 3, a lot of work happened on vectorization, CBO, timestamp with time zone, and so on, which may have product bugs. So, if any query gives wrong results, try disabling vectorization, CBO, map join, and so on, to see if that helps.
+
+Follow these other steps to fix incorrect results and poor performance after the migration:
+
+1. **Issue**
+ Hive query gives an incorrect result. Even the `select count(*)` query gives an incorrect result.
+
+ **Cause**
+ The property `hive.compute.query.using.stats` is set to true by default. If it's set to true, Hive uses the stats stored in the metastore to execute the query. If the stats aren't up to date, the query returns incorrect results.
+
+ **Resolution**
+ Collect the stats for the managed tables by using the `analyze table <table_name> compute statistics;` command at the table level and the column level. Reference link - https://cwiki.apache.org/confluence/display/hive/statsdev#StatsDev-TableandPartitionStatistics
+
+1. **Issue**
+ Hive queries take a long time to execute.
+
+ **Cause**
+ If the query has a join condition, Hive creates a plan to decide whether to use a map join or a merge join, based on the table size and the join condition. If one of the tables is small, Hive loads that table in memory and performs the join operation. This way, the query execution is faster when compared to the merge join.
+
+ **Resolution**
+ Make sure to set the property "hive.auto.convert.join=true", which is the default value. Setting it to false uses the merge join and may result in poor performance.
+ Hive decides whether to use a map join based on the following properties, which are set in the cluster:
+
+ ```
+ set hive.auto.convert.join=true;
+ set hive.auto.convert.join.noconditionaltask=true;
+ set hive.auto.convert.join.noconditionaltask.size=<value>;
+ set hive.mapjoin.smalltable.filesize = <value>;
+ ```
+ A common join can convert to a map join automatically when `hive.auto.convert.join.noconditionaltask=true`, if the estimated size of the small table(s) is smaller than `hive.auto.convert.join.noconditionaltask.size` (the default value is 10000000 bytes).
+
+
+ If you face any out-of-memory (OOM) issue related to setting the property `hive.auto.convert.join` to true, it's advisable to set it to false only for that particular query at the session level and not at the cluster level. This issue might occur if the stats are wrong and Hive decides to use a map join based on the stats. See the sketch after this paragraph.
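+
+ A minimal sketch, with illustrative table names, of disabling map-join conversion at the session level for one problematic query:
+
+ ```
+ -- Applies only to this session; leave the cluster-level default (true) unchanged
+ SET hive.auto.convert.join=false;
+ SELECT a.id FROM big_a a JOIN big_b b ON a.id = b.id;
+ ```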
+
+* **Issue**
+ Hive query gives an incorrect result if the query has a join condition and the tables involved have null or empty values.
+
+ **Cause**
+ Sometimes you may get an issue related to null values if the tables involved in the query have many null values. Hive performs the query optimization incorrectly with the null values involved, which results in incorrect results.
+
+ **Resolution**
+ We recommend trying the property `set hive.cbo.returnpath.hiveop=true` at the session level if you get incorrect results. This config introduces not-null filtering on join keys. If the tables have many null values, enabling this config can optimize the join operation between multiple tables so that it considers only the non-null values.
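+
+ A minimal sketch of enabling the setting at the session level before rerunning the affected query:
+
+ ```
+ SET hive.cbo.returnpath.hiveop=true;
+ ```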
+
+* **Issue**
+ Hive query gives an incorrect result if the query has multiple join conditions.
+
+ **Cause**
+ Sometimes Tez produces bad runtime plans whenever the same join appears multiple times with map joins.
+
+ **Resolution**
+ There's a chance of getting incorrect results when you set `hive.merge.nway.joins` to false. Try setting it to true only for the affected query. This setting helps queries with multiple joins on the same condition by merging the joins into a single join operator. This method is useful with large shuffle joins to avoid a reshuffle phase.
+
+* **Issue**
+ The query execution time increases day by day when compared to earlier runs.
+
+ **Cause**
+ This issue might occur if there's an increase in the number of small files. Hive takes time to read all the files to process the data, which increases the execution time.
+
+ **Resolution**
+ Make sure to run compaction frequently for the managed tables. This step avoids accumulating small files and improves performance (see the sketch after the reference link).
+
+ Reference link: [Hive Transactions - Apache Hive - Apache Software Foundation](https://cwiki.apache.org/confluence/display/hive/hive+transactions).
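+
+ A minimal sketch, with an illustrative (non-partitioned) table name, of requesting a compaction manually and checking its progress:
+
+ ```
+ -- Request a major compaction for a managed ACID table; the compaction runs asynchronously
+ ALTER TABLE sales COMPACT 'major';
+ -- Monitor compaction status
+ SHOW COMPACTIONS;
+ ```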
++
+* **Issue**
+ Hive query gives an incorrect result when the customer uses a join condition on a managed ACID ORC table and a managed non-ACID ORC table.
+
+ **Cause**
+ From Hive 3 onwards, it's strictly required to keep all managed tables as ACID tables. To keep a table as an ACID table, the table format must be ORC; that's the main criterion. But if you disable the strict managed table property `hive.strict.managed.tables` by setting it to false, you can create a managed non-ACID table. In some cases, the customer creates an external ORC table (or the table is converted to an external table after the migration), disables the strict managed table property, and converts it to a managed table. At this point, the table is converted to the non-ACID managed ORC format.
+
+ **Resolution**
+ Hive optimization goes wrong if you join a non-ACID managed ORC table with an ACID managed ORC table.
+
+ If you're converting an external table to managed table,
+ 1. Don't set the property `hive.strict.managed.tables` to false. If you do, you can create a non-ACID managed table, but that isn't recommended in Hive 3.
+ 1. Convert the external table to a managed table by using the following ALTER command instead of `alter table <table_name> set TBLPROPERTIES ('EXTERNAL'='false');`:
+ ```
+ alter table rt set TBLPROPERTIES ('EXTERNAL'='false', 'transactional'='true');
+ ```
+ ## Troubleshooting guide [HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for other
## Further reading * [HDInsight 4.0 Announcement](../hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
healthcare-apis How To Use Calculatedcontent Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md
Title: How to use CalculatedContent mappings - Azure Health Data Services
-description: Learn how to use CalculatedContent mappings with the MedTech service device mappings.
+ Title: How to use CalculatedContent mappings with the MedTech service device mapping - Azure Health Data Services
+description: Learn how to use CalculatedContent mappings with the MedTech service device mapping.
Previously updated : 04/14/2023 Last updated : 05/04/2023
-# How to use CalculatedContent mappings
+# How to use CalculatedContent mappings with the MedTech service device mapping
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Title: How to use custom functions with the MedTech service device mappings - Azure Health Data Services
-description: This article describes how to use custom functions with MedTech service device mappings.
+ Title: How to use custom functions with the MedTech service device mapping - Azure Health Data Services
+description: Learn how to use custom functions with MedTech service device mapping.
Previously updated : 04/14/2023 Last updated : 05/05/2023
-# How to use custom functions with device mappings
+# How to use custom functions with the MedTech service device mapping
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](overview-of-device-mapping.md) during the device message [normalization](overview-of-device-data-processing-stages.md#normalize) process.
+Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](overview-of-device-mapping.md) during the device data [normalization](overview-of-device-data-processing-stages.md#normalize) processing stage.
> [!TIP] > For more information on JMESPath functions, see the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions).
Each function has a signature that follows the JMESPath specification. This sign
return_type function_name(type $argname) ```
-The signature indicates the valid types for the arguments. If an invalid type is passed in for an argument, an error will occur.
+The signature indicates the valid types for the arguments. If an invalid type is passed in for an argument, an error occurs.
-> [!NOTE]
-> When math-related functions are done, the end result **must** be able to fit within a [C# long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur.
+> [!IMPORTANT]
+> When math-related functions are done, the end result must be able to fit within a [C# long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur.
## Exception handling
-Exceptions may occur at various points within the event processing lifecycle. Here are the various points where they can occur:
+Exceptions may occur at various points within the device data processing lifecycle. Here are the various points where exceptions can occur:
-|Action|When|Exceptions that may occur during template parsing|Outcome|
-||-|-|-|
-|**Template parsing**|Each time a new batch of messages is received the Device mapping template is loaded and parsed.|Failure to parse the template.|System will attempt to reload and parse the latest device mapping template until parsing succeeds. No new messages will be processed until parsing is successful.|
-|**Template parsing**|Each time a new batch of messages is received the Device mapping template is loaded and parsed.|Failure to parse any expressions.|System will attempt to reload and parse the latest device mapping template until parsing succeeds. No new messages will be processed until parsing is successful.|
-|**Function Execution**|Each time a function is executed against data within a message.|Input data doesn't match that of the function signature.|System stops processing that message. The message isn't retried.|
-|**Function execution**|Each time a function is executed against data within a message.|Any other exceptions listed in the description of the function.|System stops processing that message. The message isn't retried.|
+|Action|When|Exceptions that may occur during parsing of the device mapping|Outcome|
+||-|--|-|
+|**Device mapping parsing**|Each time a new batch of device messages are received, the device mapping is loaded and parsed.|Failure to parse the device mapping.|System attempts to reload and parse the latest device mapping until parsing succeeds. No new device messages are processed until parsing is successful.|
+|**Device mapping parsing**|Each time a new batch of device messages are received, the device mapping is loaded and parsed.|Failure to parse any expressions.|System attempts to reload and parse the latest device mapping until parsing succeeds. No new device messages are processed until parsing is successful.|
+|**Function execution**|Each time a function is executed against device data within a device message.|Input device data doesn't match that of the function signature.|System stops processing that device message. The device message isn't retried.|
+|**Function execution**|Each time a function is executed against device data within a device message.|Any other exceptions listed in the description of the function.|System stops processing that device message. The device message isn't retried.|
## Mathematical functions
Exceptions may occur at various points within the event processing lifecycle. He
number add(number $left, number $right) ```
-Returns the result of adding the left argument to the right.
+Returns the result of adding the left argument to the right argument.
Examples:
Examples:
number divide(number $left, number $right) ```
-Returns the result of dividing the left argument by the right.
+Returns the result of dividing the left argument by the right argument.
Examples:
Examples:
number multiply(number $left, number $right) ```
-Returns the result of multiplying the left argument with the right.
+Returns the result of multiplying the left argument with the right argument.
Examples:
Examples:
number pow(number $left, number $right) ```
-Returns the result of raising the left argument to the power of the right.
+Returns the result of raising the left argument to the power of the right argument.
Examples:
Examples:
number subtract(number $left, number $right) ```
-Returns the result of subtracting the right argument from the left.
+Returns the result of subtracting the right argument from the left argument.
Examples:
Examples:
string insertString(string $original, string $toInsert, number pos) ```
-Produces a new string by inserting the value of *toInsert* into the string *original*. The string will be inserted at position *pos* within the string *original*.
+Produces a new string by inserting the value of `toInsert` into the string `original`. The string is inserted at position `pos` within the string `original`.
If the positional argument is zero based, the position of zero refers to the first character within the string.
-If the positional argument provided is out of range of the length of *original*, then an error will occur.
+If the positional argument provided is out of range of the length of `original`, then an error occurs.
Examples:
Examples:
| {"unix": 0} | fromUnixTimestampMs(unix) | "1970-01-01T00:00:00+0" | > [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
+> See the MedTech service article [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md) for assistance fixing errors using the MedTech service logs.
## Next steps
-In this article, you learned how to use the MedTech service custom functions with the device mappings.
+In this article, you learned how to use the MedTech service custom functions within the device mapping.
-To learn how to configure the MedTech service device mappings, see
+For an overview of the MedTech service device mapping, see
> [!div class="nextstepaction"] > [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
healthcare-apis How To Use Iotjsonpathcontent Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-mappings.md
Title: How to use IotJsonPathContent mappings in the MedTech service device mappings - Azure Health Data Services
-description: This article describes how to use IotJsonPathContent mappings with the MedTech service device mappings.
+ Title: How to use IotJsonPathContent mappings with the MedTech service device mapping - Azure Health Data Services
+description: Learn how to use IotJsonPathContent mappings with the MedTech service device mapping.
Previously updated : 04/14/2023 Last updated : 05/04/2023
-# How to use IotJsonPathContent mappings
+# How to use IotJsonPathContent mappings with the MedTech service device mapping
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification. This article describes how to use IoTJsonPathContent mappings with the MedTech service [device mapping](overview-of-device-mapping.md).
-## IotJsonPathContent
+## Overview of IotJsonPathContent mappings
The IotJsonPathContent is similar to the JsonPathContent except the `DeviceIdExpression` and `TimestampExpression` aren't required.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
General availability (GA) of Azure Health Data services in West Central US regio
#### FHIR Service **Fixed performance for Search Queries with identifiers**+ This bug fix addresses timeout issues observed for search queries with identifiers, by leveraging OPTIMIZE clause. For more details, visit [#3207](https://github.com/microsoft/fhir-server/pull/3207) **Fixed transient issues associated with loading custom search parameters**+ This bug fix addresses the issue, where the FHIR service would not load the latest SearchParameter status in event of failure. For more details, visit [#3222](https://github.com/microsoft/fhir-server/pull/3222)
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-test-certificates.md
Your IoT device also needs a copy of its device certificates so that it can auth
3. Retrieve the SHA1 thumbprint (called a thumbprint in IoT Hub contexts) from each certificate. The thumbprint is a 40 hexadecimal character string. Use the following openssl command to view the certificate and find the thumbprint: ```PowerShell
- openssl x509 -in certs\iot-device-<device name>-primary.cert.pem -text -thumbprint
+ Write-Host (Get-Pfxcertificate -FilePath certs\iot-device-<device name>-primary.cert.pem).Thumbprint
``` Run this command twice, once for the primary certificate and once for the secondary certificate. You provide thumbprints for both certificates when you register a new IoT device using self-signed X.509 certificates.
iot Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-introduction.md
Previously updated : 03/24/2023 Last updated : 05/02/2023 #Customer intent: As a newcomer to IoT, I want to understand what IoT is, what services are available, and examples of business cases so I can figure out where to start.
# What is Azure Internet of Things (IoT)?
-The Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud services that let you connect, monitor, and control your IoT assets at scale. In simpler terms, an IoT solution is made up of IoT devices that communicate with cloud services.
+The Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud services, edge components, and SDKs that let you connect, monitor, and control your IoT assets at scale. In simpler terms, an IoT solution is made up of IoT devices that communicate with cloud services.
The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the key groups of components: devices, IoT cloud services, other cloud services, and solution-wide concerns. Other articles in this section provide more detail on each of these components. :::image type="content" source="media/iot-introduction/iot-architecture.svg" alt-text="Diagram that shows the high-level IoT solution architecture." border="false":::
+## Solution options
+
+To build an IoT solution for your business, you typically evaluate your solution by using the *managed app platform* approach and build your enterprise solution by using the *platform services*.
+
+A managed app platform lets you quickly evaluate your IoT solution by reducing the number of decisions needed to achieve results. The managed app platform takes care of most infrastructure elements in your solution, letting you focus on adding industry knowledge and evaluating the solution. Azure IoT Central is a managed app platform.
+
+Platform services provide all the building blocks for customized and flexible IoT applications. You have more options to choose and code when you connect your devices, and ingest, store, and analyze your data. Azure IoT platform services include Azure IoT Hub, Device Provisioning Service, and Azure Digital Twins.
+
+| Managed app platform | Platform services |
+|-|-|
+| Take advantage of a platform that handles the security and management of your IoT applications and devices. | Have full control over the underlying services in your solution. For example: </br> Scaling and securing services to meet your needs. </br> Using in-house or partner expertise to onboard devices and provision services. |
+| Customize branding, dashboards, user roles, devices, and telemetry. However, you can't customize the underlying IoT services. | Fully customize and control your IoT solution. |
+| Has a simple, predictable pricing structure. | Let you fine-tune services to control overall costs. |
+
+To learn more, see [What Azure technologies and services can you use to create IoT solutions?](iot-services-and-technologies.md).
+ ## IoT devices An IoT device is typically made up of a circuit board with sensors attached that uses WiFi to connect to the internet. For example:
There's a wide variety of devices available from different manufacturers to buil
Microsoft provides open-source [Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) that you can use to build the apps that run on your devices.
-To learn more, see [IoT device development](iot-overview-device-development.md).
+> [!IMPORTANT]
+> Because IoT Central uses IoT Hub internally, any device that can connect to an IoT Central application can also connect to an IoT hub.
+
+To learn more about the devices in your IoT solution, see [IoT device development](iot-overview-device-development.md).
## Connectivity
-Typically, IoT devices send telemetry from the sensors to cloud services in your solution. However, other types of communication are possible such as a cloud service sending commands to your devices. The following are some examples of device-to-cloud and cloud-to-device communication:
+Typically, IoT devices send telemetry from their attached sensors to cloud services in your solution. However, other types of communication are possible such as a cloud service sending commands to your devices. The following are examples of device-to-cloud and cloud-to-device communication:
* A mobile refrigeration truck sends temperature every 5 minutes to an IoT Hub. * A cloud service sends a command to a device to change the frequency at which it sends telemetry to help diagnose a problem.
-* A device sends alerts based on the values read from its sensors. For example, a device monitoring a batch reactor in a chemical plant, sends an alert when the temperature exceeds a certain value.
+* A device monitoring a batch reactor in a chemical plant sends an alert when the temperature exceeds a certain value.
+
+* A thermostat reports the maximum temperature the device has reached since the last reboot.
-* Your devices send information to display on a dashboard for viewing by human operators. For example, a control room in a refinery may show the temperature, pressure, and flow volumes in each pipe, enabling operators to monitor the facility.
+* A cloud service sets the target temperature for a thermostat device.
-The [IoT Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) and IoT Hub support common [communication protocols](../iot-hub/iot-hub-devguide-protocols.md) such as HTTP, MQTT, and AMQP.
+The [IoT Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) and IoT Hub support common [communication protocols](../iot-hub/iot-hub-devguide-protocols.md) such as HTTP, MQTT, and AMQP for device-to-cloud and cloud-to-device communication. In some scenarios, you may need a gateway to connect your IoT devices to your cloud services.
-IoT devices have different characteristics when compared to other clients such as browsers and mobile apps. The device SDKs help you address the challenges of connecting devices securely and reliably to your cloud services. Specifically, IoT devices:
+IoT devices have different characteristics when compared to other clients such as browsers and mobile apps. Specifically, IoT devices:
* Are often embedded systems with no human operator. * Can be deployed in remote locations, where physical access is expensive.
IoT devices have different characteristics when compared to other clients such a
* May have intermittent, slow, or expensive network connectivity. * May need to use proprietary, custom, or industry-specific application protocols.
-To learn more, see [Device infrastructure and connectivity](iot-overview-device-connectivity.md).
+The device SDKs help you address the challenges of connecting devices securely and reliably to your cloud services.
+
+To learn more device connectivity and gateways, see [Device infrastructure and connectivity](iot-overview-device-connectivity.md).
## Cloud services In an IoT solution, the cloud services typically:
-* Receive telemetry at scale from your devices, and determining how to process and store that data.
+* Receive telemetry at scale from your devices, and determine how to process and store that data.
* Analyze the telemetry to provide insights, either in real time or after the fact.
-* Send commands from the cloud to a specific device.
-* Provision devices and controlling which devices can connect to your infrastructure.
-* Control the state of your devices and monitoring their activities.
+* Send commands from the cloud to specific devices.
+* Provision devices and control which devices can connect to your infrastructure.
+* Control the state of your devices and monitor their activities.
* Manage the firmware installed on your devices.
-For example, in a remote monitoring solution for an oil pumping station, the services use telemetry from the pumps to identify anomalous behavior. When a cloud service identifies an anomaly, it can automatically send a command back to the device to take a corrective action. This process generates an automated feedback loop between the device and the cloud that greatly increases the solution efficiency.
+For example, in a remote monitoring solution for an oil pumping station, the services use telemetry from the pumps to identify anomalous behavior. When a cloud service identifies an anomaly, it can automatically send a command to the device to take a corrective action. This process implements an automated feedback loop between the device and the cloud that greatly increases the solution efficiency.
+
+Some cloud services, such as IoT Hub and the Device Provisioning Service, are IoT specific. Other cloud services, such as storage and visualization, provide generic services to your solution.
+
+To learn more, see:
-Some cloud services, such as IoT Hub and the Device Provisioning Service, are IoT specific. Other cloud services can provide generic services to your solution such as storage and visualizations.
+- [Device management and control](iot-overview-device-management.md)
+- [Message processing in an IoT solution](iot-overview-message-processing.md)
+- [Extend your IoT solution](iot-overview-solution-extensibility.md)
+- [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)
## Solution-wide concerns
-Any IoT solution has to address the following solution-wide concerns:
+Any IoT solution must address the following solution-wide concerns:
* [Security](iot-security-best-practices.md) including physical security, authentication, authorization, and encryption
-* Solution management including deployment and monitoring.
+* [Solution management](iot-overview-solution-management.md) including deployment and monitoring.
* High availability and disaster recovery for all the components in your solution. * Scalability for all the services in your solution.
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
IoT Central provides a rich set of features that you can use to analyze and visu
Now that you've seen an overview of the analysis and visualization options available to your IoT solution, some suggested next steps include: -- [Choose the right IoT solution](iot-solution-options.md)
+- [IoT solution options](iot-introduction.md#solution-options)
- [Azure IoT services and technologies](iot-services-and-technologies.md)
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
Last updated 04/03/2023
-# As a solution builder, I want a high-level overview of the options for extensing an IoT solution so that I can easily find relevant content for my scenario.
+# As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
# Extend your IoT solution
The IoT Central application templates provide a starting point for building IoT
Now that you've seen an overview of the extensibility options available to your IoT solution, some suggested next steps include: - [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)-- [Choose the right IoT solution](iot-solution-options.md)
+- [IoT solution options](iot-introduction.md#solution-options)
iot Iot Overview Solution Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-management.md
+
+ Title: Manage your IoT solution
+description: An overview of the management options for an IoT solution such as the Azure portal and ARM templates.
+++++ Last updated : 05/04/2023++
+# As a solution builder, I want a high-level overview of the options for managing an IoT solution so that I can easily find relevant content for my scenario.
++
+# Manage your IoT solution
+
+This overview introduces the key concepts around the options to manage an Azure IoT solution. Each section includes links to content that provides further detail and guidance.
+
+The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the areas relevant to managing an IoT solution.
++
+There are many options for managing your IoT solution including the Azure portal, PowerShell, and ARM templates. This article summarizes the main options.
+
+## Monitoring
+
+While there are tools specifically for [monitoring devices](iot-overview-device-management.md#device-monitoring) in your IoT solution, you also need to be able to monitor the health of your IoT solution:
+
+| Service | Monitoring options |
+||--|
+| IoT Hub | [Use Azure Monitor to monitor your IoT hub](../iot-hub/monitor-iot-hub.md) </br> [Check IoT Hub service and resource health](../iot-hub/iot-hub-azure-service-health-integration.md) |
+| Device Provisioning Service (DPS) | [Use Azure Monitor to monitor your DPS instance](../iot-dps/monitor-iot-dps.md) |
+| IoT Edge | [Use Azure Monitor to monitor your IoT Edge fleet](../iot-edge/how-to-collect-and-transport-metrics.md) </br> [Monitor IoT Edge deployments](../iot-edge/how-to-monitor-iot-edge-deployments.md) |
+| IoT Central | [Use audit logs to track activity in your IoT Central application](../iot-central/core/howto-use-audit-logs.md) </br> [Use Azure Monitor to monitor your IoT Central application](../iot-central/core/howto-manage-iot-central-from-portal.md#monitor-application-health) |
+| Azure Digital Twins | [Use Azure Monitor to monitor Azure Digital Twins resources](../digital-twins/how-to-monitor.md) |
+
+## Azure portal
+
+The Azure portal offers a consistent GUI environment for managing your Azure IoT services. For example, you can use the portal to:
+
+| Action | Links |
+|--|-|
+| Deploy service instances in your Azure subscription | [Manage your IoT hubs](../iot-hub/iot-hub-create-through-portal.md) </br>[Set up DPS](../iot-dps/quick-setup-auto-provision.md) </br> [Manage IoT Central applications](../iot-central/core/howto-manage-iot-central-from-portal.md) </br> [Set up an Azure Digital Twins instance](../digital-twins/how-to-set-up-instance-portal.md) |
+| Configure services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-portal.md) </br> [Deploy IoT Edge modules](../iot-edge/how-to-deploy-at-scale.md) </br> [Configure file uploads (IoT Hub)](../iot-hub/iot-hub-configure-file-upload.md) </br> [Manage device enrollments (DPS)](../iot-dps/how-to-manage-enrollments.md) </br> [Manage allocation policies (DPS)](../iot-dps/how-to-use-allocation-policies.md) |
+
+## ARM templates and Bicep
+
+To implement infrastructure as code for your Azure IoT solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. Bicep is a new language that offers the same capabilities as ARM templates but with a syntax that's easier to use.
+
+For example, you can use ARM templates or Bicep to:
+
+| Action | Links |
+|--|-|
+| Deploy service instances in your Azure subscription | [Create an IoT hub](../iot-hub/iot-hub-rm-template-powershell.md) </br> [Set up DPS](../iot-dps/quick-setup-auto-provision-bicep.md) |
+| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-arm.md) </br> [Azure Resource Manager SDK samples (IoT Central)](https://github.com/Azure-Samples/azure-iot-central-arm-sdk-samples) |
+
+For ARM templates and Bicep reference documentation, see:
+
+- [IoT Hub](/azure/templates/microsoft.devices/iothubs)
+- [DPS](/azure/templates/microsoft.devices/provisioningservices)
+- [Device update for IoT Hub](/azure/templates/microsoft.deviceupdate/accounts)
+- [IoT Central](/azure/templates/microsoft.iotcentral/iotapps)
+
+## PowerShell
+
+Use PowerShell to automate the management of your IoT solution. For example, you can use PowerShell to:
+
+| Action | Links |
+|--|-|
+| Deploy service instances in your Azure subscription | [Create an IoT hub using the New-AzIotHub cmdlet](../iot-hub/iot-hub-create-using-powershell.md) </br> [Create an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-powershell#create-an-application) |
+| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-powershell.md) </br> [Manage an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-powershell#modify-an-application) |
+
+For PowerShell reference documentation, see:
+
+- [Az.IotHub](/powershell/module/az.iothub/) module
- [Az.IotCentral](/powershell/module/az.iotcentral/) module
+- [PowerShell functions for IoT Edge for Linux on Windows](../iot-edge/reference-iot-edge-for-linux-on-windows-functions.md)
+
+## Azure CLI
+
+Use the Azure CLI to automate the management of your IoT solution. For example, you can use the Azure CLI to:
+
+| Action | Links |
+|--|-|
+| Deploy service instances in your Azure subscription | [Create an IoT hub using the Azure CLI](../iot-hub/iot-hub-create-using-cli.md) </br> [Create an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-cli#create-an-application) </br> [Set up an Azure Digital Twins instance](../digital-twins/how-to-set-up-instance-cli.md) </br> [Set up DPS](../iot-dps/quick-setup-auto-provision-cli.md) |
+| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-azure-cli.md) </br> [Deploy and monitor IoT Edge modules at scale](../iot-edge/how-to-deploy-cli-at-scale.md) </br> [Manage an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-cli#modify-an-application) </br> [Create an Azure Digital Twins graph](../digital-twins/tutorial-command-line-cli.md) |
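+
+For example, the following commands are a minimal sketch of deploying an IoT hub and a DPS instance with the Azure CLI; the resource group and service names are placeholders that you replace with your own values.
+
+```azurecli-interactive
+# Create a resource group, an IoT hub, and a Device Provisioning Service instance.
+az group create --name MyIotResourceGroup --location eastus
+az iot hub create --resource-group MyIotResourceGroup --name my-example-iot-hub --sku S1
+az iot dps create --resource-group MyIotResourceGroup --name my-example-dps
+```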
+
+For Azure CLI reference documentation, see:
+
+- [`az iot hub`](/cli/azure/iot/hub)
+- [`az iot device` (IoT Hub)](/cli/azure/iot/device)
+- [`az iot edge`](/cli/azure/iot/edge)
+- [`az iot dps`](/cli/azure/iot/dps)
+- [`az iot central`](/cli/azure/iot/central)
+- [`az iot du` (Azure Device Update)](/cli/azure/iot/du)
+- [`az dt` (Azure Digital Twins)](/cli/azure/dt)
+
+## Azure DevOps tools
+
+Use Azure DevOps tools to automate the management of your IoT solution. For example, you can use Azure DevOps tools to enable:
+
+- [Continuous integration and continuous deployment to Azure IoT Edge devices](../iot-edge/how-to-continuous-integration-continuous-deployment.md)
+- [Integration of IoT Central with Azure Pipelines for CI/CD](../iot-central/core/howto-integrate-with-devops.md)
+
+## Next steps
+
+Now that you've seen an overview of the management options available for your IoT solution, some suggested next steps include:
+
+- [What Azure technologies and services can you use to create IoT solutions?](iot-services-and-technologies.md)
+- [IoT solution options](iot-introduction.md#solution-options)
iot Iot Services And Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-services-and-technologies.md
Previously updated : 11/29/2022 Last updated : 05/02/2023
Azure IoT technologies and services provide you with options to create a wide variety of IoT solutions that enable digital transformation for your organization. For example, you can: * Use [Azure IoT Central](https://apps.azureiotcentral.com), a managed IoT application platform, to evaluate your IoT solution.
-* Use Azure IoT platform services such as [Azure IoT Hub](../iot-hub/about-iot-hub.md) and the [Azure IoT device SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build a custom IoT solution from scratch.
+* Use Azure IoT platform services such as [Azure IoT Hub](../iot-hub/about-iot-hub.md) and the [Device Provisioning Service](../iot-dps/about-iot-dps.md) to build a custom IoT solution from scratch.
-![Azure IoT technologies, services, and solutions](./media/iot-services-and-technologies/iot-technologies-services.png)
+## Devices and device SDKs
-## Azure IoT Central
+You can choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). You can implement your own embedded code using the open-source [device SDKs](./iot-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
-[IoT Central](https://apps.azureiotcentral.com) is an IoT application platform as a service (aPaaS) that reduces the burden and cost of developing, managing, and maintaining IoT solutions. Use IoT Central to quickly evaluate your IoT scenario and assess the opportunities it can create for your business. IoT Central streamlines the development of a complex and continually evolving IoT infrastructure by letting you to focus on determining the business impact you can create with your IoT data.
+You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play is a _device capability model_ schema that describes device capabilities. Use the device capability model to configure a cloud-based solution such as an IoT Central application.
-The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications. Once you've used IoT Central to evaluate your IoT scenario, you can then build your enterprise ready solutions by using the power of Azure IoT platform.
+[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable off-line scenarios. You can manage IoT Edge devices from IoT Central.
-Choose devices from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com) to quickly connect to your solution. Use the IoT Central web UI to monitor and manage your devices to keep them healthy and connected. Use connectors and APIs to integrate your IoT Central application with other business applications.
+[Azure Sphere](/azure-sphere/product-overview/what-is-azure-sphere) is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured microcontroller unit, a custom Linux-based operating system, and a cloud-based security service that provides continuous, renewable security.
-As a fully managed application platform, IoT Central has a simple, predictable pricing model.
+> [!IMPORTANT]
+> Because IoT Central uses IoT Hub internally, any device that can connect to an IoT Central application can also connect to an IoT hub.
-## Custom solutions
+To learn more, see [Azure IoT device and application development](../iot-develop/about-iot-develop.md).
-To build an IoT solution from scratch, use one or more of the following Azure IoT technologies and
+## Azure IoT Central
-### Devices
+[IoT Central](https://apps.azureiotcentral.com) is a managed app platform that reduces the burden and cost of developing, managing, and maintaining IoT solutions. Use IoT Central to quickly evaluate your IoT scenario and assess the opportunities it can create for your business. IoT Central streamlines the development of a complex and continually evolving IoT infrastructure by letting you focus on determining the business impact you can create with your IoT data.
-Develop your IoT devices using one of the [Azure IoT Starter Kits](/samples/azure-samples/azure-iot-starter-kits/azure-iot-starter-kits/) or choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
+The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications. Once you've used IoT Central to evaluate your IoT scenario, you can then build your enterprise-ready solutions by using the power of the Azure IoT platform.
-You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play, is a _device capability model_ schema that describes device capabilities. Use the device capability model to generate your embedded device code and configure a cloud-based solution such as an IoT Central application.
+As a fully managed app platform, IoT Central has a simple, predictable pricing model.
-[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable off-line scenarios. You can manage IoT Edge devices from IoT Central.
+## Custom solutions
-[Azure Sphere](/azure-sphere/product-overview/what-is-azure-sphere) is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured microcontroller unit, a custom Linux-based operating system, and a cloud-based security service that provides continuous, renewable security.
+To build an IoT solution from scratch, use one or more of the following Azure IoT technologies and
### Cloud connectivity
IoT Central uses digital twins to synchronize devices and data in the real world
### Data and analytics
-IoT devices typically generate large amounts of time series data, such as temperature readings from sensors. [Azure Time Series Insights](../time-series-insights/time-series-insights-overview.md) can connect to an IoT hub, read the telemetry stream from your devices, store that data, and enable you to query and visualize it.
+IoT devices typically generate large amounts of time series data, such as temperature readings from sensors. [Azure Data Explorer](/azure/data-explorer/ingest-data-iot-hub-overview) can connect to an IoT hub, read the telemetry stream from your devices, store that data, and enable you to query and visualize it.
[Azure Maps](../azure-maps/index.yml) is a collection of geospatial services that use fresh mapping data to provide accurate geographic context to web and mobile applications. You can use a REST API, a web-based JavaScript control, or an Android SDK to build your applications.
iot Iot Solution Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-solution-options.md
- Title: Azure Internet of Things (IoT) solution options
-description: Guidance on using platform services and a managed app platform when you're evaluating and building an IoT solution. Platform services, such as IoT Hub and Digital Twins, are building blocks for your IoT solutions. A managed app platform, such as IoT Central, lets you quickly get started evaluating an IoT solution.
---- Previously updated : 11/29/2022---
-# What is the right IoT solution for your business?
-
-To build an IoT solution for your business, you typically evaluate your solution by using the *managed app platform* approach and build your enterprise solution by using the *platform services*.
-
-Platform services provide all the building blocks for customized and flexible IoT applications. You have more options to choose and code when you connect devices, and ingest, store, and analyze your data. Azure IoT platform services include the products Azure IoT Hub and Azure Digital Twins.
-
-A managed app platform lets you quickly evaluate your IoT solution by reducing the number of decisions needed to achieve results. The managed app platform takes care of most infrastructure elements in your solution, so you can focus on adding industry knowledge, and evaluating the solution. Azure IoT Central is a managed app platform.
-
-## Management
-
-The platform services give you full control over the underlying services in your solution. For example:
--- Scaling and securing services to meet your needs.-- Using in-house or partner expertise to onboard devices and provision services.-
-A managed app platform lets you take advantage of a platform that handles the security and management of your IoT applications and devices.
-
-## Control
-
-Platform services let you fully customize and control the solution architecture.
-
-A managed app platform lets you customize branding, dashboards, user roles, devices, and telemetry. However, you don't handle the underlying IoT system management.
-
-## Pricing
-
-Platform services let you fine-tune services and control overall costs.
-
-A managed app platform gives you a simple, predictable pricing structure.
-
-## Summary
-
-Platform services approach let you:
--- Fine-tune the services in the solution.-- Have a high degree of control over the services in the solution.-- Fully customize the solution.-
-A managed app platform is useful when you're evaluating an IoT solution and:
--- Don't want to dedicate extensive resources to system design, development, and management.-- Do want a predictable pricing structure.-- Do want some customization capabilities.-
-## Next steps
-
-For a more comprehensive explanation of the different services and platforms, and how they're used, see [Azure IoT services and technologies](iot-services-and-technologies.md).
-
-To learn more about the key attributes of successful IoT solutions, see the [8 attributes of successful IoT solutions](https://aka.ms/8attributes) white paper.
-
-For an in-depth discussion of IoT architecture, see the [Microsoft Azure IoT Reference Architecture](/azure/architecture/reference-architectures/iot).
-
-To learn about the device migration tool, see [Migrate devices from Azure IoT Central to Azure IoT Hub](../iot-central/core/howto-migrate-to-iot-hub.md).
key-vault Create Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate.md
When a request to create a KV certificate completes, the status of the pending o
``` ## Partnered CA Providers+ Certificate creation can be completed manually or using a "Self" issuer. Key Vault also partners with certain issuer providers to simplify the creation of certificates. The following types of certificates can be ordered for key vault with these partner issuer providers. |Provider|Certificate type|Configuration setup
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
GlobalSignCA is now in the certificate authority list.
You can use Azure PowerShell to create and manage Azure resources by using commands or scripts. Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through the Azure portal in a browser.
-If you choose to install and use PowerShell locally, you need Azure AZ PowerShell module 1.0.0 or later to complete the procedures here. Type `$PSVersionTable.PSVersion` to determine the version. If you need to upgrade, see [Install Azure AZ PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure:
+If you choose to install and use PowerShell locally, you need Azure AZ PowerShell module 1.0.0 or later to complete the procedures here. Type `$PSVersionTable.PSVersion` to determine the version. If you need to upgrade, see [Install Azure AZ PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure:
```azurepowershell-interactive
-Login-AzAccount
+Connect-AzAccount
``` 1. Create an Azure resource group by using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed.
key-vault Overview Renew Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/overview-renew-certificate.md
By using short-lived certificates or by increasing the frequency of certificate
This article discusses how to renew your Azure Key Vault certificates. ## Get notified about certificate expiration+ To get notified about certificate life events, you would need to add certificate contact. Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event for any certificate in the key vault. ### Steps to set certificate notifications
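
For example, you can add a certificate contact by using the Azure CLI. The following command is a minimal sketch; the vault name and email address are placeholders for your own values.

```azurecli-interactive
az keyvault certificate contact add --vault-name "<your-unique-keyvault-name>" --email "admin@contoso.com"
```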
By using Azure Key Vault, you can import certificates from any CA, a benefit tha
To renew a nonintegrated CA certificate:
+# [Azure portal](#tab/azure-portal)
+ 1. Sign in to the Azure portal, and then open the certificate you want to renew. 1. On the certificate pane, select **New Version**. 3. On the **Create a certificate** page, make sure the **Generate** option is selected under **Method of Certificate Creation**.
To renew a nonintegrated CA certificate:
1. Bring back the signed request, and select **Merge Signed Request** on the same certificate operation pane. 10. The status after merging will show **Completed** and on the main certificate pane you can hit **Refresh** to see the new version of the certificate.
+# [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI [az keyvault certificate create](/cli/azure/keyvault/certificate#az-keyvault-certificate-create) command, providing the name of the certificate you wish to renew:
+
+```azurecli-interactive
+az keyvault certificate create --vault-name "<your-unique-keyvault-name>" -n "<name-of-certificate-to-renew>" -p "$(az keyvault certificate get-default-policy)"
+```
+
+After renewing the certificate, you can view all the versions of the certificate using the Azure CLI [az keyvault certificate list-versions](/cli/azure/keyvault/certificate#az-keyvault-certificate-list) command:
+
+```azurecli-interactive
+az keyvault certificate list-versions --vault-name "<your-unique-keyvault-name>" -n "<name-of-renewed-certificate>"
+```
+
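+If your certificate policy specifies a nonintegrated CA instead of the default self-signed issuer, the new version is created as a pending operation: you download the certificate signing request (CSR), have it signed by your CA, and then merge the signed certificate back. The following commands are a sketch of that flow; the file path is a placeholder.
+
+```azurecli-interactive
+# Retrieve the pending operation, which includes the CSR to send to your CA.
+az keyvault certificate pending show --vault-name "<your-unique-keyvault-name>" --name "<name-of-certificate-to-renew>"
+
+# After the CA returns the signed certificate, merge it into the pending version.
+az keyvault certificate pending merge --vault-name "<your-unique-keyvault-name>" --name "<name-of-certificate-to-renew>" --file "<path-to-signed-certificate-file>"
+```
+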
+# [Azure PowerShell](#tab/azure-powershell)
+
+Use the Azure PowerShell [Add-AzKeyVaultCertificate](/powershell/module/az.keyvault/add-azkeyvaultcertificate) cmdlet with a certificate policy created by [New-AzKeyVaultCertificatePolicy](/powershell/module/az.keyvault/new-azkeyvaultcertificatepolicy), providing the name of the certificate you wish to renew:
+
+```azurepowershell-interactive
+$Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal
+
+Add-AzKeyVaultCertificate -VaultName "<your-unique-keyvault-name>" -Name "<name-of-certificate-to-renew>" -CertificatePolicy $Policy
+```
+
+After renewing the certificate, you can view all the versions of the certificate using the Azure PowerShell [Get-AzKeyVaultCertificate](/powershell/module/az.keyvault/get-azkeyvaultcertificate) cmdlet:
+
+```azurepowershell-interactive
+Get-AzKeyVaultCertificate -VaultName "<your-unique-keyvault-name>" -Name "<name-of-renewed-certificate>" -IncludeVersions
+```
+++ > [!NOTE] > It's important to merge the signed CSR with the same CSR request that you created. Otherwise, the key won't match.
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive
-Login-AzAccount
+Connect-AzAccount
``` ## Create a resource group
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
After importing the certificate, you can view the certificate using the Azure CL
az keyvault certificate show --vault-name "<your-key-vault-name>" --name "ExampleCertificate" ``` - # [Azure PowerShell](#tab/azure-powershell) You can import a certificate into Key Vault using the Azure PowerShell [Import-AzKeyVaultCertificate](/powershell/module/az.keyvault/import-azkeyvaultcertificate) cmdlet.
key-vault Assign Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/assign-access-policy.md
For more information on creating groups in Azure Active Directory using Azure Po
1. Sign in to Azure: ```azurepowershell-interactive
- Login-AzAccount
+ Connect-AzAccount
``` ## Acquire the object ID
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-In this quickstart, you create a key vault with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+In this quickstart, you create a key vault with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive
-Login-AzAccount
+Connect-AzAccount
``` ## Create a resource group
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive
-Login-AzAccount
+Connect-AzAccount
``` ## Create a resource group
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
Title: Create and retrieve attributes of a managed key in Azure Key Vault – Az
description: Quickstart showing how to set and retrieve a managed key from Azure Key Vault using Azure PowerShell Previously updated : 03/24/2023 Last updated : 05/05/2023
In this quickstart, you will create and activate an Azure Key Vault Managed HSM (Hardware Security Module) with PowerShell. Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs. For more information on Managed HSM, you may review the [Overview](overview.md).
-If you do not have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-The service is available in limited regions – To learn more about availability, please see [Azure Dedicated HSM purchase options](https://azure.microsoft.com/pricing/details/azure-dedicated-hsm).
--
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive
-Login-AzAccount
+Connect-AzAccount
``` ## Create a resource group
Use the Azure PowerShell [New-AzKeyVaultManagedHsm](/powershell/module/az.keyvau
- Your principal ID: Pass the Azure Active Directory principal ID that you obtained in the last section to the "Administrator" parameter. ```azurepowershell-interactive
-New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "eastus2" -Administrator "<your-principal-ID>"
+New-AzKeyVaultManagedHsm -Name "your-unique-managed-hsm-name" -ResourceGroupName "myResourceGroup" -Location "eastus2" -Administrator "your-principal-ID" -SoftDeleteRetentionInDays "# of days to retain the managed hsm after softdelete"
``` > [!NOTE] > The create command can take a few minutes. Once it returns successfully you are ready to activate your HSM.
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md
See the following articles:
- As an admin, see [VM sizing](administrator-guide.md#vm-sizing). - As an admin, see [Request a capacity increase](./how-to-request-capacity-increase.md)-- [Frequently asked questions](classroom-labs-faq.yml).
lighthouse Tenants Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md
Title: Tenants, users, and roles in Azure Lighthouse scenarios description: Understand how Azure Active Directory tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 01/13/2023 Last updated : 05/04/2023
When creating your authorizations, we recommend the following best practices:
- In most cases, you'll want to assign permissions to an Azure AD user group or service principal, rather than to a series of individual user accounts. This lets you add or remove access for individual users through your tenant's Azure AD, rather than having to [update the delegation](../how-to/update-delegation.md) every time your individual access requirements change. - Follow the principle of least privilege so that users only have the permissions needed to complete their job, helping to reduce the chance of inadvertent errors. For more information, see [Recommended security practices](../concepts/recommended-security-practices.md).-- Include an authorization with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) so that you can [remove access to the delegation](../how-to/remove-delegation.md) later if needed. If this role is not assigned, access to delegated resources can only be removed by a user in the customer's tenant.-- Be sure that any user who needs to [view the My customers page in the Azure portal](../how-to/view-manage-customers.md) has the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access).
+- Include an authorization with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) so that you can [remove access to the delegation](../how-to/remove-delegation.md) later if needed. If this role isn't assigned, access to delegated resources can only be removed by a user in the customer's tenant.
+- Be sure that any user who needs to [view the My customers page in the Azure portal](../how-to/view-manage-customers.md) has the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role that includes Reader access).
> [!IMPORTANT] > In order to add permissions for an Azure AD group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). ## Role support for Azure Lighthouse
-When defining an authorization, each user account must be assigned one of the [Azure built-in roles](../../role-based-access-control/built-in-roles.md). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) are not supported.
+When you define an authorization, each user account must be assigned one of the [Azure built-in roles](../../role-based-access-control/built-in-roles.md). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) are not supported.
All [built-in roles](../../role-based-access-control/built-in-roles.md) are currently supported with Azure Lighthouse, with the following exceptions: - The [Owner](../../role-based-access-control/built-in-roles.md#owner) role is not supported.-- Any built-in roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.-- The [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) built-in role is supported, but only for the limited purpose of [assigning roles to a managed identity in the customer tenant](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). No other permissions typically granted by this role will apply. If you define a user with this role, you must also specify the built-in role(s) that this user can assign to managed identities.-
-In some cases, a role that had previously been supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned the role will still be able to work on previously delegated resources, but they won't be able to perform tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission.
+- The [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role is supported, but only for the limited purpose of [assigning roles to a managed identity in the customer tenant](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). No other permissions typically granted by this role will apply. If you define a user with this role, you must also specify the role(s) that this user can assign to managed identities.
+- Any roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.
+- Roles that include any of the following [actions](../../role-based-access-control/role-definitions.md#actions) are not supported:
+
+ - */write
+ - */delete
+ - Microsoft.Authorization/*
+ - Microsoft.Authorization/*/write
+ - Microsoft.Authorization/*/delete
+ - Microsoft.Authorization/roleAssignments/write
+ - Microsoft.Authorization/roleAssignments/delete
+ - Microsoft.Authorization/roleDefinitions/write
+ - Microsoft.Authorization/roleDefinitions/delete
+ - Microsoft.Authorization/classicAdministrators/write
+ - Microsoft.Authorization/classicAdministrators/delete
+ - Microsoft.Authorization/locks/write
+ - Microsoft.Authorization/locks/delete
+ - Microsoft.Authorization/denyAssignments/write
+ - Microsoft.Authorization/denyAssignments/delete
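+
+To check whether a particular built-in role includes `DataActions` or any of the actions listed above, you can inspect its definition before using it in an authorization. The following Azure CLI query is a sketch; the role name is only an example.
+
+```azurecli-interactive
+# Show the actions and dataActions declared by a built-in role.
+az role definition list --name "Virtual Machine Contributor" \
+  --query "[0].permissions[0].{actions:actions, dataActions:dataActions}" --output json
+```
+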
> [!IMPORTANT]
-> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md) specified for each role. In some cases, even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported, the actions included in a role may allow access to data, where data is exposed through access keys and not accessed via the user's identity. For example, the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data.
+> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md#actions) specified for each role. In some cases, even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported, the actions included in a role may allow access to data, where data is exposed through access keys and not accessed via the user's identity. For example, the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data.
-> [!NOTE]
-> As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly-added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a period of time; however, you won't be able to publish new offers using such roles.
+In some cases, a role that was previously supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned the role will still be able to work on previously delegated resources, but they won't be able to perform tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission.
+
+As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a while; however, you won't be able to publish new offers using such roles.
## Transferring delegated subscriptions between Azure AD tenants
-If a subscription is [transferred to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account), the [registration definition and registration assignment resources](architecture.md#delegation-resources-created-in-the-customer-tenant) created through the [Azure Lighthouse onboarding process](../how-to/onboard-customer.md) will be preserved. This means that access granted through Azure Lighthouse to managing tenants will remain in effect for that subscription (or for delegated resource groups within that subscription).
+If a subscription is [transferred to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account), the [registration definition and registration assignment resources](architecture.md#delegation-resources-created-in-the-customer-tenant) created through the [Azure Lighthouse onboarding process](../how-to/onboard-customer.md) are preserved. This means that access granted through Azure Lighthouse to managing tenants remains in effect for that subscription (or for delegated resource groups within that subscription).
-The only exception is if the subscription is transferred to an Azure AD tenant to which it had been previously delegated. In this case, the delegation resources for that tenant are removed and the access granted through Azure Lighthouse will no longer apply, since the subscription now belongs directly to that tenant (rather than being delegated to it through Azure Lighthouse). However, if that subscription had also been delegated to other managing tenants, those other managing tenants will retain the same access to the subscription.
+The only exception is if the subscription is transferred to an Azure AD tenant to which it had been previously delegated. In this case, the delegation resources for that tenant are removed and the access granted through Azure Lighthouse no longer applies, since the subscription now belongs directly to that tenant (rather than being delegated to it through Azure Lighthouse). However, if that subscription was also delegated to other managing tenants, those other managing tenants will retain the same access to the subscription.
## Next steps
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-customer.md
Title: Onboard a customer to Azure Lighthouse description: Learn how to onboard a customer to Azure Lighthouse, allowing their resources to be accessed and managed by users in your tenant. Previously updated : 11/28/2022 Last updated : 05/04/2023 ms.devlang: azurecli
If you are unable to successfully onboard your customer, or if your users have t
- The `managedbyTenantId` value must not be the same as the tenant ID for the subscription being onboarded. - You can't have multiple assignments at the same scope with the same `mspOfferName`. - The **Microsoft.ManagedServices** resource provider must be registered for the delegated subscription. This should happen automatically during the deployment but if not, you can [register it manually](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).-- Authorizations must not include any users with the [Owner](../../role-based-access-control/built-in-roles.md#owner) built-in role or any built-in roles with [DataActions](../../role-based-access-control/role-definitions.md#dataactions).
+- Authorizations must not include any users with the [Owner](../../role-based-access-control/built-in-roles.md#owner) role, any roles with [DataActions](../../role-based-access-control/role-definitions.md#dataactions), or any roles that include [restricted actions](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse).
- Groups must be created with [**Group type**](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md#group-types) set to **Security** and not **Microsoft 365**. - If access was granted to a group, check to make sure the user is a member of that group. If they aren't, you can [add them to the group using Azure AD](../../active-directory/fundamentals/active-directory-groups-members-azure-portal.md), without having to perform another deployment. Note that [group owners](../../active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md) are not necessarily members of the groups they manage, and may need to be added in order to have access. - There may be an additional delay before access is enabled for [nested groups](../..//active-directory/fundamentals/active-directory-groups-membership-azure-portal.md).
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Health probes support multiple protocols. The availability of a specific health
| **[Probe types](#probe-types)** | TCP, HTTP, HTTPS | TCP, HTTP | | **[Probe down behavior](#probe-down-behavior)** | All probes down, all TCP flows continue. | All probes down, all TCP flows expire. |
->[!IMPORTANT]
->Load Balancer health probes originate from the IP address 168.63.129.16 and must not be blocked for probes to mark your instance as up. Review [probe source IP address](#probe-source-ip-address) for details. To see this probe traffic within your backend instance, review [the Azure Load Balancer FAQ](./load-balancer-faqs.yml).
->
->
->Regardless of configured time-out threshold, HTTP(S) load balancer health probes will automatically mark the instance as down if the server returns any status code that isn't HTTP 200 OK or if the connection is terminated via TCP reset.
- ## Probe configuration Health probe configuration consists of the following elements:
-* Duration of the interval between individual probes
-
-* Protocol
-
-* Port
-
-* HTTP path to use for HTTP GET when using HTTP(S) probes
-
->[!NOTE]
->A probe definition is not mandatory or checked for when using Azure PowerShell, Azure CLI, Templates or API. Probe validation tests are only done when using the Azure Portal.
+| Health Probe configuration | Details |
+| --- | --- |
+| Protocol | Protocol of the health probe. This is the protocol type you would like the health probe to use. Available options are TCP, HTTP, and HTTPS. |
+| Port | Port of the health probe. The destination port you would like the health probe to use when it connects to the virtual machine to check the virtual machine's health status. You must ensure that the virtual machine is also listening on this port (that is, the port is open). |
+| Interval (seconds) | Interval of the health probe. The amount of time (in seconds) between consecutive health check attempts to the virtual machine. |
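+
+As an illustration of these settings, the following Azure CLI command adds an HTTP health probe to an existing Standard load balancer. This is a minimal sketch; the resource group, load balancer, and probe names are placeholders.
+
+```azurecli-interactive
+az network lb probe create \
+  --resource-group MyResourceGroup \
+  --lb-name MyLoadBalancer \
+  --name MyHealthProbe \
+  --protocol Http \
+  --port 80 \
+  --path / \
+  --interval 5
+```
+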
## Application signal, detection of the signal, and Load Balancer reaction The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. On the next healthy probe up, the health probe marks your backend pool instances as healthy.
-For example, a health probe set to five seconds. The time at which a probe is sent isn't synchronized with when your application may change state. The total time it takes for your health probe to reflect your application state can fall into one of the two following scenarios:
+For example, consider a health probe with an interval of 5 seconds. The time at which a probe is sent isn't synchronized with when your application may change state. The total time it takes for your health probe to reflect your application state can fall into one of the two following scenarios:
1. If your application produces a time-out response just before the next probe arrives, the detection of the events will take 5 seconds plus the duration of the application time-out when the probe arrives. You can assume the detection to take slightly over 5 seconds.- 2. If your application produces a time-out response just after the next probe arrives, the detection of the events won't begin until the probe arrives and times out, plus another 5 seconds. You can assume the detection to take just under 10 seconds.
-For this example, once detection has occurred, the platform takes a small amount of time to react to the change.
-
-The reaction depends on:
-
+For this example, once detection has occurred, the platform takes a small amount of time to react to the change. The reaction depends on:
* When the application changes state * When the change is detected * When the next health probe is sent * When the detection has been communicated across the platform Assume the reaction to a time-out response takes a minimum of 5 seconds and a maximum of 10 seconds to react to the change.- This example is provided to illustrate what is taking place. It's not possible to forecast an exact duration beyond the guidance in the example.
->[!NOTE]
->The health probe will probe all running instances in the backend pool. If an instance is stopped it will not be probed until it has been started again.
- ## Probe types The protocol used by the health probe can be configured to one of the following options:
-* TCP listeners
-
-* HTTP endpoints
-
-* HTTPS endpoints
-
-The available protocols depend on the load balancer SKU used:
- || TCP | HTTP | HTTPS | | | | | | | **Standard SKU** | &#9989; | &#9989; | &#9989; |
The available protocols depend on the load balancer SKU used:
TCP probes initiate a connection by performing a three-way open TCP handshake with the defined port. TCP probes terminate a connection with a four-way close TCP handshake.
-The minimum probe interval is 5 seconds and can’t exceed 120 seconds.
- A TCP probe fails when: * The TCP listener on the instance doesn't respond at all during the timeout period. A probe is marked down based on the number of timed-out probe requests, which were configured to go unanswered before marking down the probe.
A TCP probe fails when:
### HTTP/HTTPS probe
->[!NOTE]
->HTTPS probe is only available for [Standard Load Balancer](./load-balancer-overview.md).
-
-HTTP and HTTPS probes build on the TCP probe and issue an HTTP GET with the specified path. Both of these probes support relative paths for the HTTP GET. HTTPS probes are the same as HTTP probes with the addition of a Transport Layer Security (TLS). The health probe is marked up when the instance responds with an HTTP status 200 within the timeout period. The health probe attempts to check the configured health probe port every 15 seconds by default. The minimum probe interval is 5 seconds and can’t exceed 120 seconds.
+HTTP and HTTPS probes build on the TCP probe and issue an HTTP GET with the specified path. Both of these probes support relative paths for the HTTP GET. HTTPS probes are the same as HTTP probes with the addition of Transport Layer Security (TLS). The health probe is marked up when the instance responds with an HTTP status 200 within the timeout period. The health probe attempts to check the configured health probe port every 15 seconds by default.
HTTP / HTTPS probes can be useful to implement your own logic to remove instances from load balancer if the probe port is also the listener for the service. For example, you might decide to remove an instance if it's above 90% CPU and return a non-200 HTTP status.
-> [!NOTE]
-> The HTTPS probe requires the use of certificates based that have a minimum signature hash of SHA256 in the entire chain.
-
-If you use Cloud Services and have web roles that use w3wp.exe, you achieve automatic monitoring of your website. Failures in your website code return a non-200 status to the load balancer probe.
- An HTTP / HTTPS probe fails when: * Probe endpoint returns an HTTP response code other than 200 (for example, 403, 404, or 500). The probe is marked down immediately.
An HTTP / HTTPS probe fails when:
* Probe endpoint closes the connection via a TCP reset.
+> [!NOTE]
+> The HTTPS probe requires the use of certificates that have a minimum signature hash of SHA256 in the entire chain.
+ ## Probe up behavior TCP, HTTP, and HTTPS health probes are considered healthy and mark the backend endpoint as healthy when:
TCP, HTTP, and HTTPS health probes are considered healthy and mark the backend e
Any backend endpoint that has achieved a healthy state is eligible for receiving new flows.
-> [!NOTE]
-> If the health probe fluctuates, the load balancer waits longer before it puts the backend endpoint back in the healthy state. This extra wait time protects the user and the infrastructure and is an intentional policy.
- ## Probe down behavior ### TCP connections
In addition to load balancer health probes, the [following operations use this I
* Don't enable [TCP timestamps](https://tools.ietf.org/html/rfc1323). TCP timestamps can cause health probes to fail due to the VM's guest OS TCP stack dropping TCP packets. The dropped packets can cause the load balancer to mark the endpoint as down. TCP timestamps are routinely enabled by default on security hardened VM images and must be disabled.
+* A probe definition isn't mandatory or checked for when you use Azure PowerShell, the Azure CLI, templates, or the API. Probe validation tests are only done when using the Azure portal.
+
+* If the health probe fluctuates, the load balancer waits longer before it puts the backend endpoint back in the healthy state. This extra wait time protects the user and the infrastructure and is an intentional policy.
+
+* Ensure your virtual machine instances are running. The health probe will probe all running instances in the backend pool. If an instance is stopped, it will not be probed until it has been started again.
+ ## Monitoring Public and internal [Standard Load Balancer](./load-balancer-overview.md) expose per endpoint and backend endpoint health probe status through [Azure Monitor](./monitor-load-balancer.md). Other Azure services or partner applications can consume these metrics.
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
Previously updated : 08/28/2022 Last updated : 05/05/2023
There are three types of health probes:
| **Probe types** | TCP, HTTP, HTTPS | TCP, HTTP | | **Probe down behavior** | All probes down, all TCP flows continue. | All probes down, all TCP flows expire. |
+Health probes have the following properties:
+
+| Health Probe configuration | Details |
+| --- | --- |
+| Name | Name of the health probe. This is a name that you define for your health probe. |
+| Protocol | Protocol of the health probe. This is the protocol type you would like the health probe to use. Available options are TCP, HTTP, and HTTPS. |
+| Port | Port of the health probe. The destination port you would like the health probe to use when it connects to the virtual machine to check the virtual machine's health status. You must ensure that the virtual machine is also listening on this port (that is, the port is open). |
+| Interval (seconds) | Interval of the health probe. The amount of time (in seconds) between consecutive health check attempts to the virtual machine. |
+| Used by | The list of load balancer rules using this specific health probe. You should have at least one rule using the health probe for it to be effective. |
+| Path | The URI that the health probe uses to request health status from the virtual machine instance (only applicable to HTTP(S) probes). |
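+
+For example, the following Azure CLI commands are a sketch of viewing and changing these properties on an existing load balancer; the resource names are placeholders.
+
+```azurecli-interactive
+# List the health probes configured on a load balancer.
+az network lb probe list --resource-group MyResourceGroup --lb-name MyLoadBalancer --output table
+
+# Change the probing interval of an existing health probe to 15 seconds.
+az network lb probe update --resource-group MyResourceGroup --lb-name MyLoadBalancer --name MyHealthProbe --interval 15
+```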
+ >[!IMPORTANT] >Load Balancer health probes originate from the IP address 168.63.129.16 and must not be blocked for probes to mark your instance as up. To see this probe traffic within your backend instance, review [the Azure Load Balancer FAQ](./load-balancer-faqs.yml). >
machine-learning Concept Automl Forecasting Calendar Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-calendar-features.md
The following table summarizes the holiday features:
Feature name | Description | -- |
-`Holiday`| String feature that specifies whether a date is a regional or national holiday. Days within some range of a holiday are also marked.
+`Holiday`| String feature that specifies whether a date is a national/regional holiday. Days within some range of a holiday are also marked.
`isPaidTimeOff`| Binary feature that takes value 1 if the day is a "paid time-off holiday" in the given country or region. AutoML uses Azure Open Datasets as a source for holiday information. For more information, see the [PublicHolidays](/python/api/azureml-opendatasets/azureml.opendatasets.publicholidays) documentation.
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
$schema: http://azureml/sdk-2-0/Connection.json
type: s3 name: my_s3_connection
-target: https://<mybucket>.amazonaws.com # add the s3 bucket details
+target: <mybucket> # add the s3 bucket details
credentials: type: access_key access_key_id: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX # add access key id
from azure.ai.ml import MLClient
from azure.ai.ml.entities import WorkspaceConnection from azure.ai.ml.entities import AccessKeyConfiguration
-target = "https://<mybucket>.amazonaws.com" # add the s3 bucket details
+target = "<mybucket>" # add the s3 bucket details
name = "<my_s3_connection>" # name of the connection
-wps_connection = WorkspaceConnection(name=name,
+wps_connection=WorkspaceConnection(name=name,
type="s3", target= target, credentials= AccessKeyConfiguration(access_key_id="XXXXXX",acsecret_access_key="XXXXXXXX")
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
The following table describes the key attributes of a deployment:
| Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). | | Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+> [!NOTE]
+> The model and container image (as defined in Environment) can be referenced again at any time by the deployment, for example, when the deployment instances go through security patches or other recovery operations. If you used a registered model or container image in Azure Container Registry for deployment and later removed the model or the container image, the deployments relying on these assets can fail when reimaging happens. If you removed the model or the container image, ensure that the dependent deployments are re-created or updated with an alternative model or container image.
+ # [Azure CLI](#tab/azure-cli) ### Configure a deployment
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Previously updated : 11/28/2022 Last updated : 04/28/2023
The following table shows more limits in the platform. Reach out to the Azure Ma
### Azure Machine Learning managed online endpoints
-Azure Machine Learning managed online endpoints have limits described in the following table.
-
-| **Resource** | **Limit** |
-| | |
-| Endpoint name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> |
-| Deployment name| Deployment names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> |
-| Number of endpoints per subscription | 50 |
-| Number of deployments per subscription | 200 |
-| Number of deployments per endpoint | 20 |
-| Number of instances per deployment | 20 <sup>2</sup> |
-| Max request time-out at endpoint level | 90 seconds |
-| Total requests per second at endpoint level for all deployments | 500 <sup>3</sup> |
-| Total connections per second at endpoint level for all deployments | 500 <sup>3</sup> |
-| Total connections active at endpoint level for all deployments | 500 <sup>3</sup> |
-| Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>3</sup> |
+Azure Machine Learning managed online endpoints have limits described in the following table. These limits are regional, meaning that they apply separately in each region you use.
+
+| **Resource** | **Limit** | **Allows exception** |
+| | | |
+| Endpoint name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> | - |
+| Deployment name| Deployment names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> | - |
+| Number of endpoints per subscription | 50 | Yes |
+| Number of deployments per subscription | 200 | Yes |
+| Number of deployments per endpoint | 20 | Yes |
+| Number of instances per deployment | 20 <sup>2</sup> | Yes |
+| Max request time-out at endpoint level | 90 seconds | - |
+| Total requests per second at endpoint level for all deployments | 500 <sup>3</sup> | Yes |
+| Total connections per second at endpoint level for all deployments | 500 <sup>3</sup> | Yes |
+| Total connections active at endpoint level for all deployments | 500 <sup>3</sup> | Yes |
+| Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>3</sup> | Yes |
<sup>1</sup> Single dashes, as in `my-endpoint-name`, are accepted in endpoint and deployment names.
migrate Concepts Migration Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md
A few recommendations:
**Over-provisioned servers** | Export the assessment report and filter for servers with high CPU utilization (%) and memory utilization (%). Solve capacity constraints, prevent overstrained servers from breaking, and increase performance by migrating these servers to Azure. In Azure, use autoscaling capabilities to meet demand.<br/><br/> Analyze assessment reports to investigate storage constraints. Analyze disk IOPS and throughput, and the recommended disk type. - **Start small, then go big**: Start by moving apps and workloads that present minimal risk and complexity, to build confidence in your migration strategy. Analyze Azure Migrate assessment recommendations together with your CMDB repository, to find and migrate dev/test workloads that might be candidates for pilot migrations. Feedback and learnings from pilot migrations can be helpful as you begin migrating production workloads.-- **Comply**: Azure maintains the largest compliance portfolio in the industry, in terms of breadth and depth of offerings. Use compliance requirements to prioritize migrations, so that apps and workloads comply with your national, regional, and industry-specific standards and laws. This is especially true for organizations that deal with business-critical process, hold sensitive information, or are in heavily regulated industries. In these types of organizations, standards and regulations abound, and might change often, being difficult to keep up with.
+- **Comply**: Azure maintains the largest compliance portfolio in the industry, in terms of breadth and depth of offerings. Use compliance requirements to prioritize migrations, so that apps and workloads comply with your national/regional and industry-specific standards and laws. This is especially true for organizations that deal with business-critical processes, hold sensitive information, or are in heavily regulated industries. In these types of organizations, standards and regulations abound, change often, and can be difficult to keep up with.
## Finalize the migration plan
open-datasets Dataset Bing Covid 19 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-bing-covid-19.md
Last updated 04/16/2021
Bing COVID-19 data includes confirmed, fatal, and recovered cases from all regions, updated daily. This data is reflected in the [Bing COVID-19 Tracker](https://bing.com/covid).
-Bing collects data from multiple trusted, reliable sources, including the [World Health Organization (WHO)](https://www.who.int/emergencies/diseases/novel-coronavirus-2019), [Centers for Disease Control and Prevention (CDC)](https://www.cdc.gov/coronavirus/2019-ncov/https://docsupdatetracker.net/index.html), national and state public health departments, [BNO News](https://bnonews.com/index.php/2020/04/the-latest-coronavirus-cases/), [24/7 Wall St.](https://247wallst.com/), and [Wikipedia](https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_pandemic).
+Bing collects data from multiple trusted, reliable sources, including the [World Health Organization (WHO)](https://www.who.int/emergencies/diseases/novel-coronavirus-2019), [Centers for Disease Control and Prevention (CDC)](https://www.cdc.gov/coronavirus/2019-ncov/https://docsupdatetracker.net/index.html), national/regional and state public health departments, [BNO News](https://bnonews.com/index.php/2020/04/the-latest-coronavirus-cases/), [24/7 Wall St.](https://247wallst.com/), and [Wikipedia](https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_pandemic).
[!INCLUDE [Open Dataset usage notice](../../includes/open-datasets-usage-note.md)]
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The following table provides information on the Peering Service connectivity par
| Dublin | [Vodafone](https://www.vodafone.com/business/solutions/fixed-connectivity/internet-services#solutions) | | Frankfurt | [Vodafone](https://www.vodafone.com/business/solutions/fixed-connectivity/internet-services#solutions), [Colt](https://www.colt.net/product/cloud-prioritisation/) | | Geneva | [Intercloud](https://intercloud.com/what-we-do/partners/microsoft-saas/), [Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/wireline/ip-plus.html) |
-| Hong Kong | [Colt](https://www.colt.net/product/cloud-prioritisation/), [Singtel](https://www.singtel.com/business/campaign/singnet-cloud-connect-microsoft-direct), [Vodafone](https://www.vodafone.com/business/solutions/fixed-connectivity/internet-services#solutions) |
+| Hong Kong SAR | [Colt](https://www.colt.net/product/cloud-prioritisation/), [Singtel](https://www.singtel.com/business/campaign/singnet-cloud-connect-microsoft-direct), [Vodafone](https://www.vodafone.com/business/solutions/fixed-connectivity/internet-services#solutions) |
| Jakarta | [NTT Communications](https://www.ntt.com/en/services/network/software-defined-network.html) | | Johannesburg | [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html), [Liquid Telecom](https://liquidcloud.africa/keep-expanding-365-direct/) | | Kuala Lumpur | [Telekom Malaysia](https://www.tm.com.my/) |
postgresql Concepts Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compliance.md
Last updated 10/20/2022
## Overview of Compliance Certifications on Microsoft Azure
-Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches along with requests from governments to access online customer information. Important regulatory requirements such as the [General Data Protection Regulation (GDPR)](/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](/compliance/regulatory/offering-sox) make selecting cloud services that help customers achieve trust, transparency, security, and compliance essential. To help customers achieve compliance with national, regional, and industry-specific regulations and requirements, Azure Database for PostgreSQL - Flexible Server builds upon Microsoft Azure's compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
+Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches along with requests from governments to access online customer information. Important regulatory requirements such as the [General Data Protection Regulation (GDPR)](/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](/compliance/regulatory/offering-sox) make selecting cloud services that help customers achieve trust, transparency, security, and compliance essential. To help customers achieve compliance with national/regional and industry-specific regulations and requirements, Azure Database for PostgreSQL - Flexible Server builds upon Microsoft Azure's compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry both in terms of breadth (total number of offerings), as well as depth (number of customer-facing services in assessment scope). Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments and customer guidance documents produced by Microsoft. More detailed information about Azure compliance offerings is available from the [Trust](https://www.microsoft.com/trust-center/compliance/compliance-overview) Center. ## Azure Database for PostgreSQL - Flexible Server Compliance Certifications
- Azure Database for PostgreSQL - Flexible Server has achieved a comprehensive set of national, regional, and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
+ Azure Database for PostgreSQL - Flexible Server has achieved a comprehensive set of national/regional and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
> [!div class="mx-tableFixed"] > | **Certification**| **Applicable To** |
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
The following are current limitations for configuring the customer-managed key i
- Once enabled, CMK encryption can't be removed. If customer desires to remove this feature, it can only be done via [restore of the server to non-CMK server](./concepts-backup-restore.md#point-in-time-recovery). -- No support for Azure HSM Key Vault + ## Next steps
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: April 2023 * Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL – Flexible Server.
-* General availability: [Power BI integration](./connect-with-power-bi-desktop.md) for Azure Database for PostgreSQL – Flexible Server.
+* Public preview of [Power BI integration](./connect-with-power-bi-desktop.md) for Azure Database for PostgreSQL – Flexible Server.
* Public preview of [Troubleshooting guides](./concepts-troubleshooting-guides.md) for Azure Database for PostgreSQL – Flexible Server. ## Release: March 2023
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table for the mobile network site resour
|The name for the site. |**Instance details: Name**| |The region in which you deployed the private mobile network. |**Instance details: Region**| |The packet core in which to create the mobile network site resource. |**Instance details: Packet core name**|
- |The [region code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.|
+ |The [region code name](region-code-names.md) of the region in which you deployed the private mobile network.</br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.|
|The mobile network resource representing the private mobile network to which you're adding the site. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.| |The service plan for the site that you are creating. See [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/). |**Instance details: Service plan**|
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
Collect all of the following values for the mobile network resource that will re
|The Azure subscription to use to deploy the mobile network resource. You must use the same subscription for all resources in your private mobile network deployment. You identified this subscription in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). |**Project details: Subscription** |The Azure resource group to use to deploy the mobile network resource. You should use a new resource group for this resource. It's useful to include the purpose of this resource group in its name for future identification (for example, *contoso-pmn-rg*). </br></br> Note: We recommend that this resource group is also used when [Collecting the required information for a site](collect-required-information-for-a-site.md). |**Project details: Resource group**| |The name for the private mobile network. |**Instance details: Mobile network name**|
- |The region in which you're deploying the private mobile network. This can be the East US or the West Europe region. |**Instance details: Region**|
+ |The region in which you're deploying the private mobile network. |**Instance details: Region**|
|The mobile country code for the private mobile network. If you do not already have this, contact your national telecom regulator. <br><br> **Note:** For internal private networks you can configure the MCC to 001 (a test value) or 999. |**Network configuration: Mobile country code (MCC)**| |The mobile network code for the private mobile network. If you do not already have this, contact your national telecom regulator. <br><br> **Note:** For internal private networks you can configure the MNC to 01 (a test value), 99 or 999. |**Network configuration: Mobile network code (MNC)**|
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
Collect each of the values in the table below.
|The ID of the Azure subscription in which the Azure resources are deployed. |**SUBSCRIPTION_ID**| |The name of the resource group in which the AKS cluster is deployed. This can be found by using the **Manage** button in the **Azure Kubernetes Service** pane of the Azure portal. |**RESOURCE_GROUP_NAME**| |The name of the AKS cluster resource. This can be found by using the **Manage** button in the **Azure Kubernetes Service** pane of the Azure portal. |**RESOURCE_NAME**|
-|The region in which the Azure resources are deployed. This must match the region into which the mobile network will be deployed, which must be one of the regions supported by AP5GC: **EastUS** or **WestEurope**.</br></br>This value must be the [region's code name](region-code-names.md); see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) for a list of supported regions. |**LOCATION**|
+|The region in which the Azure resources are deployed. This must match the region into which the mobile network will be deployed, which must be one of the regions supported by AP5GC.</br></br>This value must be the [region's code name](region-code-names.md). |**LOCATION**|
|The name of the **Custom location** resource to be created for the AKS cluster. </br></br>This value must start and end with alphanumeric characters, and must contain only alphanumeric characters, `-` or `.`. |**CUSTOM_LOCATION**| ## Install Kubernetes extensions
You should see the new **Custom location** visible as a resource in the Azure po
If you have made an error in the Azure Stack Edge configuration, you can use the portal to remove the AKS cluster (see [Deploy Azure Kubernetes service on Azure Stack Edge](/azure/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge)). You can then modify the settings via the local UI.
-Alternatively, you can perform a full reset using the **Device Reset** blade in the local UI (see [Azure Stack Edge device reset and reactivation](/azure/databox-online/azure-stack-edge-reset-reactivate-device)) and then restart this procedure. In this case, you should also [delete any associated resources](/azure/databox-online/azure-stack-edge-return-device?tabs=azure-portal) left in the Azure Portal after completing the Azure Stack Edge reset. This will include some or all of the following, depending on how far through the process you are:
+Alternatively, you can perform a full reset using the **Device Reset** blade in the local UI (see [Azure Stack Edge device reset and reactivation](/azure/databox-online/azure-stack-edge-reset-reactivate-device)) and then restart this procedure. In this case, you should also [delete any associated resources](/azure/databox-online/azure-stack-edge-return-device?tabs=azure-portal) left in the Azure portal after completing the Azure Stack Edge reset. This will include some or all of the following, depending on how far through the process you are:
- **Azure Stack Edge** resource - Autogenerated **KeyVault** associated with the **Azure Stack Edge** resource
Your packet core should now be in service with the updated ASE configuration. To
Your Azure Stack Edge device is now ready for Azure Private 5G Core. The next step is to collect the information you'll need to deploy your private network. -- [Collect the required information to deploy a private mobile network](./collect-required-information-for-private-mobile-network.md)
+- [Collect the required information to deploy a private mobile network](./collect-required-information-for-private-mobile-network.md)
private-5g-core Configure Service Sim Policy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-sim-policy-arm-template.md
Two Azure resources are defined in the template.
- **Subscription:** select the Azure subscription you used to create your private mobile network. - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network. - **Region:** select the region in which you deployed the private mobile network.
- - **Location:** enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*.
+ - **Location:** enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network. - **Existing Slice Name:** enter the name of the Slice resource representing your network slice. - **Existing Data Network Name:** enter the name of the data network. This value must match the name you used when creating the data network.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
| **Subscription** | Select the Azure subscription you used to create your private mobile network. | | **Resource group** | Select the resource group containing the mobile network resource representing your private mobile network. | | **Region** | Select the region in which you deployed the private mobile network. |
- | **Location** | Enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*. |
+ | **Location** | Enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. |
| **Existing Mobile Network Name** | Enter the name of the mobile network resource representing your private mobile network. | | **Existing Data Network Name** | Enter the name of the data network. This value must match the name you used when creating the data network. | | **Site Name** | Enter a name for your site.|
private-5g-core Create Slice Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-slice-arm-template.md
Title: Create a slice - ARM template description: This how-to guide shows how to create a slice in your private mobile network using an Azure Resource Manager (ARM) template. --++ Last updated 09/30/2022
The following Azure resource is defined in the template.
|--|--| | **Subscription** | Select the Azure subscription you used to create your private mobile network. | | **Resource group** | Select the resource group containing the mobile network resource representing your private mobile network. |
- | **Region** | Select **East US**. |
- | **Location** | Enter *eastus*. |
+ | **Region** | Select the region in which you deployed the private mobile network. |
+ | **Location** | Enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. |
| **Existing Mobile Network Name** | Enter the name of the Mobile Network resource representing your private mobile network. | | **Slice Name** | Enter the name of the network slice. | | **Sst** | Enter the slice/service type (SST) value. If the slice will be used by 4G UEs, enter a value of 1. |
private-5g-core Deploy Private Mobile Network With Site Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-command-line.md
Last updated 03/15/2023
# Quickstart: Deploy a private mobile network and site - Azure CLI
-Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use an Azure CLI to deploy the following.
+Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use the Azure CLI to deploy the following resources in the East US Azure region. See [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core) for the Azure regions where Azure Private 5G Core is available.
- A private mobile network. - A site.
private-5g-core Deploy Private Mobile Network With Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-powershell.md
Last updated 03/15/2023
# Quickstart: Deploy a private mobile network and site - Azure PowerShell
-Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use an Azure PowerShell to deploy the following.
+Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use Azure PowerShell to deploy the following resources in the East US Azure region. See [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core) for the Azure regions where Azure Private 5G Core is available.
- A private mobile network. - A site.
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
The RAN that you use to broadcast the signal across the enterprise site must com
You should ask your RAN partner for the countries/regions and frequency bands for which the RAN is approved. You may find that you need to use multiple RAN partners to cover the countries and regions in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, we recommend that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
-Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries, spectrum must be obtained from the national regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
+Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries, spectrum must be obtained from the national/regional regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
#### Maximum Transmission Units (MTUs)
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
The following Azure resources are defined in the template.
- **Subscription:** select the Azure subscription you used to create your private mobile network. - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network. - **Region:** select the region in which you deployed the private mobile network.
- - **Location:** enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*.
+ - **Location:** enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network. - **Existing Sim Policy Name:** enter the name of the SIM policy you want to assign to the SIMs. - **Sim Group Name:** enter the name for the new SIM group.
private-5g-core Region Code Names https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/region-code-names.md
Title: Region code names for Azure Private 5G Core description: Learn about the region code names used for the location parameter in Azure Private 5G Core ARM templates--++
DisplayName    Name         RegionalDisplayName
East US        eastus       (US) East US
West Europe    westeurope   (Europe) West Europe
```
+
+See [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core) for the Azure regions where Azure Private 5G Core is available.
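If you prefer to look up a region's code name programmatically instead of reading it from the table above, a minimal sketch with the Azure SDK for Python could look like this (the subscription ID is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

# Print the display name and code name (the value used for the Location parameter)
# of every region visible to the subscription.
client = SubscriptionClient(DefaultAzureCredential())
for location in client.subscriptions.list_locations("<subscription-id>"):
    print(f"{location.display_name}: {location.name}")
```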
private-5g-core Region Move Private Mobile Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/region-move-private-mobile-network-resources.md
Title: Move Azure Private 5G Core private mobile network resources between regions description: In this how-to guide, you'll learn how to move your private mobile network resources to a different region.--++ Last updated 01/04/2023
You might move your resources to another region for a number of reasons. For exa
## Prerequisites - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- Ensure Azure Private 5G Core supports the region to which you want to move your resources. Refer to [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+- Ensure Azure Private 5G Core supports the region to which you want to move your resources. Refer to [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core).
- Verify pricing and charges associated with the target region to which you want to move your resources. - Choose a name for your new resource group in the target region. This must be different to the source region's resource group name. - If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
private-5g-core Reliability Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reliability-private-5g-core.md
Last updated 01/31/2022
This article describes reliability support in Azure Private 5G Core. It covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For an overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+See [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core) for the Azure regions where Azure Private 5G Core is available.
+ ## Availability zone support The Azure Private 5G Core service is automatically deployed as zone-redundant in Azure regions that support availability zones, as listed in [Availability zone service and regional support](../reliability/availability-zones-service-support.md). If a region supports availability zones then all Azure Private 5G Core resources created in a region can be managed from any of the availability zones. No further work is required to configure or manage availability zones. Failover between availability zones is automatic.
-Azure Private 5G Core is currently available in the EastUS and WestEurope regions.
- ### Zone down experience In a zone-wide outage scenario, users should experience no impact because the service will move to take advantage of the healthy zone automatically. At the start of a zone-wide outage, you may see in-progress ARM requests time-out or fail. New requests will be directed to healthy nodes with zero impact on users and any failed operations should be retried. You'll still be able to create new resources and update, monitor and manage existing resources during the outage.
The application ensures that all cloud state is replicated between availability
Azure Private 5G Core is only available in multi-region (3+N) geographies. The service automatically replicates SIM credentials to a backup region in the same geography. This means that there's no loss of data in the event of region failure. Within four hours of the failure, all resources in the failed region are available to view through the Azure portal and ARM tools but will be read-only until the failed region is recovered. The packet core running at the edge continues to operate without interruption, and network connectivity will be maintained.
-To view all regions that support Azure Private 5G Core, see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
- ### Cross-region disaster recovery in multi-region geography Microsoft is responsible for outage detection, notification and support for the Azure cloud aspects of the Azure Private 5G Core service.
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/classification-insights.md
Microsoft Purview uses the same sensitive information types as Microsoft 365, al
|**Overview of sources with classifications** |Displays tiles that provide: <br>- The number of subscriptions found in your data <br>- The number of unique classifications found in your data <br>- The number of classified sources found <br>- The number of classified files found <br>- The number of classified tables found | |**Top sources with classified data (last 30 days)** |Shows the trend, over the past 30 days, of the number of sources found with classified data. | |**Top classification categories by sources** |Shows the number of sources found by classification category, such as **Financial** or **Government**. |
- |**Top classifications for files** |Shows the top classifications applied to files in your data, such as credit card numbers or national identification numbers. |
+ |**Top classifications for files** |Shows the top classifications applied to files in your data, such as credit card numbers or national/regional identification numbers. |
|**Top classifications for tables** | Shows the top classifications applied to tables in your data, such as personal identifying information. | | **Classification activity** <br>(files and tables) | Displays separate graphs for files and tables, each showing the number of files or tables classified over the selected timeframe. <br>**Default**: 30 days<br>Select the **Time** filter above the graphs to select a different time frame to display. | | | |
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
PUT https://{purview_account_name}.purview.azure.com/scan/datasources/<data_sour
``` >[!IMPORTANT]
-> The collection_id is the identification for the collection, not the name. For the root collection, the collection_id will be the name of the root collection, but for all sub-collections it is a 5-character ID that can be found in one of two places:
+> The collection_id is not the friendly name of the collection. For the root collection, the collection_id will be the name of the collection. For all sub-collections, it is a 5-character ID that can be found in one of two places:
> > 1. The URL in the Microsoft Purview governance portal. Select the collection, and check the URL to find where it says collection=. That will be your ID. So, for our example below, the Investments collection has the ID 50h55c. > :::image type="content" source="media/register-scan-synapse-workspace/find-collection-id.png" alt-text="Screenshot of the collection ID in the URL." lightbox="media/register-scan-synapse-workspace/find-collection-id.png" :::
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Previously updated : 03/10/2023 Last updated : 05/05/2023
Availability zone support is a property of the App Service plan. The following a
- France Central - Germany West Central - Japan East
+ - Korea Central
- North Europe - Norway East - Qatar Central
sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md
A vnet peering is the most performant way to connect securely and privately two
For SAP RISE/ECS deployments, virtual peering is the preferred way to establish connectivity with customer's existing Azure environment. Both the SAP vnet and customer vnet(s) are protected with network security groups (NSG), enabling communication on SAP and database ports through the vnet peering. Communication between the peered vnets is secured through these NSGs, limiting communication to customer's SAP environment. For details and a list of open ports, contact your SAP representative.
-SAP managed workload should run in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as customer's central infrastructure and applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](../../virtual-network/virtual-network-peering-overview.md) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region should match the workload running in customer vnets due to latency and vnet peering cost considerations. However, some of the scenarios (for example, central S/4HANA deployment for a multi-national, globally present company) also require networks to be peered globally.
+SAP managed workload should run in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as customer's central infrastructure and applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](../../virtual-network/virtual-network-peering-overview.md) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region should match the workload running in customer vnets due to latency and vnet peering cost considerations. However, some of the scenarios (for example, central S/4HANA deployment for a multi-national/regional, globally present company) also require networks to be peered globally.
:::image type="complex" source="./media/sap-rise-integration/sap-rise-peering.png" alt-text="Customer peering with SAP RISE/ECS"::: This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet.
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Previously updated : 01/18/2023 Last updated : 05/05/2023 # Preview features in Azure Cognitive Search
Preview features that transition to general availability are removed from this l
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability | |||-|| | [**Azure Files indexer**](search-file-storage-integration.md) | Indexer data source | Adds REST API support for creating indexers for [Azure Files](https://azure.microsoft.com/services/storage/files/) | Public preview, [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview). Announced in November 2021. |
-| [**Azure RBAC support (data plane)**](search-security-rbac.md) | Security | Use new built-in roles to control access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview. Use the Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication. Announced in July 2021. |
| [**Search REST API 2021-04-30-Preview**](/rest/api/searchservice/index-preview) | Security | Modifies [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) to support managed identities under Azure Active Directory, for indexers that connect to external data sources. | Public preview, [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview). Announced in May 2021. | | [**Management REST API 2021-04-01-Preview**](/rest/api/searchmanagement/) | Security | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview, [Management REST API](/rest/api/searchmanagement/), API version 2021-04-01-Preview. Announced in May 2021. | | [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | Use the [Reset Documents REST API](/rest/api/searchservice/preview-api/reset-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md
Previously updated : 10/06/2022 Last updated : 05/05/2023 # Features of Azure Cognitive Search
The following table summarizes features by category. For more information about
|-|-| | Data encryption | [**Microsoft-managed encryption-at-rest**](search-security-overview.md#encryption) is built into the internal storage layer and is irrevocable. <br/><br/>[**Customer-managed encryption keys**](search-security-manage-encryption-keys.md) that you create and manage in Azure Key Vault can be used for supplemental encryption of indexes and synonym maps. For services created after August 1 2020, CMK encryption extends to data on temporary disks, for full double encryption of indexed content.| | Endpoint protection | [**IP rules for inbound firewall support**](service-configure-firewall.md) allows you to set up IP ranges over which the search service will accept requests.<br/><br/>[**Create a private endpoint**](service-create-private-endpoint.md) using Azure Private Link to force all requests through a virtual network. |
-| Azure role-based access control | [**RBAC for data plane (preview)**](search-security-rbac.md) refers to the assignment of roles to users and groups in Azure Active Directory to control access to search content and operations. |
+| Inbound access | [**Azure role-based access control**](search-security-rbac.md) assigns roles to users and groups in Azure Active Directory for controlled access to search content and operations. You can also use [**key-based authentication**](search-security-api-keys.md) if you don't have an Azure tenant.|
| Outbound security (indexers) | [**Data access through private endpoints**](search-indexer-howto-access-private.md) allows an indexer to connect to Azure resources that are protected through Azure Private Link.<br/><br/>[**Data access using a trusted identity**](search-howto-managed-identities-data-sources.md) means that connection strings to external data sources can omit user names and passwords. When an indexer connects to the data source, the resource allows the connection if the search service was previously registered as a trusted service. | ## Portal features
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
Previously updated : 09/15/2022 Last updated : 05/05/2023 # Create an index in Azure Cognitive Search
In this article, learn the steps for defining and publishing a search index. Cre
## Prerequisites
-+ Write permissions on the search service. Permission can be granted through an [admin API key](search-security-api-keys.md) on the request. Alternatively, if you're participating in the [role-based access control public preview](search-security-rbac.md), you can issue your request as a member of the Search Contributor role.
++ Write permissions on the search service. Permission can be granted through an [admin API key](search-security-api-keys.md) on the request. Alternatively, if you're using [role-based access control](search-security-rbac.md), you can issue your request as a member of the Search Contributor role. + An external data source that provides the content to be indexed. You should refer to the data source to understand the schema requirements of your search index. Index creation is largely a schema definition exercise. Before creating one, you should have:
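As a general illustration of the schema-definition step described above, here is a hedged sketch using the azure-search-documents Python SDK; the service endpoint, admin key, index name, and fields are all hypothetical examples rather than values from the article:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
)

# Authenticate with an admin API key; a role assignment such as Search Contributor
# is the alternative when using role-based access control.
index_client = SearchIndexClient(
    endpoint="https://<service-name>.search.windows.net",
    credential=AzureKeyCredential("<admin-api-key>"),
)

# Define a minimal schema: a key field plus one full-text searchable field.
index = SearchIndex(
    name="hotels-sample",
    fields=[
        SimpleField(name="hotelId", type=SearchFieldDataType.String, key=True),
        SearchableField(name="description", type=SearchFieldDataType.String),
    ],
)

index_client.create_index(index)
```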
security Threat Modeling Tool Cryptography https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-cryptography.md
| **Applicable Technologies** | SQL Azure, OnPrem | | **Attributes** | SQL Version - V12, MsSQL2016 | | **References** | [Always Encrypted (Database Engine)](/sql/relational-databases/security/encryption/always-encrypted-database-engine) |
-| **Steps** | Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (e.g. U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access) |
+| **Steps** | Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national/regional identification numbers (e.g. U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access) |
## <a id="keys-iot"></a>Store Cryptographic Keys securely on IoT Device
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
na Previously updated : 01/20/2023 Last updated : 05/05/2023 # Data encryption models
The Azure services that support each encryption model:
| Azure SQL Managed Instance | Yes | Yes, RSA 3072-bit, including Managed HSM | Yes | | Azure SQL Database for MariaDB | Yes | - | - | | Azure SQL Database for MySQL | Yes | Yes | - |
-| Azure SQL Database for PostgreSQL | Yes | Yes | - |
+| Azure SQL Database for PostgreSQL | Yes | Yes, including Managed HSM | - |
| Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW) only) | Yes | Yes, RSA 3072-bit, including Managed HSM | - | | SQL Server Stretch Database | Yes | Yes, RSA 3072-bit | Yes | | Table Storage | Yes | Yes | Yes |
security Ransomware Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection.md
Last updated 01/10/2022
# Ransomware protection in Azure
-Ransomware and extortion are a high profit, low-cost business, which has a debilitating impact on targeted organizations, national security, economic security, and public health and safety. What started as simple, single-PC ransomware has grown to include a variety of extortion techniques directed at all types of corporate networks and cloud platforms.
+Ransomware and extortion are a high profit, low-cost business, which has a debilitating impact on targeted organizations, national/regional security, economic security, and public health and safety. What started as simple, single-PC ransomware has grown to include a variety of extortion techniques directed at all types of corporate networks and cloud platforms.
To ensure customers running on Azure are protected against ransomware attacks, Microsoft has invested heavily on the security of our cloud platforms, and provides security controls you need to protect your Azure cloud workloads
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md
Over time, this list will change and grow, just as Azure does. Make sure to chec
||--| | [Azure&nbsp;SQL&nbsp;Firewall](/azure/azure-sql/database/firewall-configure)|A network access control feature that protects against network-based attacks to database. | | [Azure&nbsp;SQL&nbsp;Connection Encryption](/azure/azure-sql/database/logins-create-manage)|To provide security, SQL Database controls access with firewall rules limiting connectivity by IP address, authentication mechanisms requiring users to prove their identity, and authorization mechanisms limiting users to specific actions and data. |
-| [Azure SQL Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine)|Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database, Azure SQL Managed Instance, and SQL Server databases. |
+| [Azure SQL Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine)|Protects sensitive data, such as credit card numbers or national/regional identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database, Azure SQL Managed Instance, and SQL Server databases. |
| [Azure&nbsp;SQL&nbsp;transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)| A database security feature that helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. | | [Azure SQL Database Auditing](/azure/azure-sql/database/auditing-overview)|An auditing feature for Azure SQL Database and Azure Synapse Analytics that tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace, or Event Hubs. | | [Virtual network rules](/azure/azure-sql/database/vnet-service-endpoint-rule-overview)|A firewall security feature that controls whether the server for your databases and elastic pools in Azure SQL Database or for your dedicated SQL pool (formerly SQL DW) databases in Azure Synapse Analytics accepts communications that are sent from particular subnets in virtual networks. |
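As a hedged illustration of the Always Encrypted behavior described in the table, the client opts in by enabling column encryption on the connection; the sketch below uses pyodbc with the Microsoft ODBC Driver for SQL Server, and the server, database, credentials, and table name are placeholders:

```python
import pyodbc

# ColumnEncryption=Enabled turns on Always Encrypted for this connection, so the
# driver transparently encrypts parameters and decrypts results client-side,
# provided the client can access the column master key.
connection = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<server-name>.database.windows.net,1433;"
    "Database=<database-name>;"
    "Uid=<user>;Pwd=<password>;"
    "Encrypt=yes;"
    "ColumnEncryption=Enabled;"
)

cursor = connection.cursor()
cursor.execute("SELECT TOP 1 * FROM dbo.<table-name>")
print(cursor.fetchone())
```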
sentinel Pulse Connect Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/pulse-connect-secure.md
Configure the facilities you want to collect and their severities.
3. Configure and connect the Pulse Connect Secure
-[Follow the instructions](https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring Syslog.htm) to enable syslog streaming of Pulse Connect Secure logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+[Follow the instructions](https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring%20Syslog.htm) to enable syslog streaming of Pulse Connect Secure logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
sentinel Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md
See the certification on the [SAP Certified Solutions Directory](https://www.sap
## Trademark attribution
-SAP S/4HANA and SAP are trademarks or registered trademarks of SAP SE or its affiliates in Germany and in other countries.
+SAP S/4HANA and SAP are trademarks or registered trademarks of SAP SE or its affiliates in Germany and in other countries/regions.
## Next steps
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
However, in some scenarios, Availability Zones can be leveraged for Disaster Rec
- Many customers who had a metro Disaster Recovery strategy while hosting applications on-premises sometimes look to mimic this strategy once they migrate applications over to Azure. These customers acknowledge the fact that metro Disaster Recovery strategy may not work in case of a large-scale physical disaster and accept this risk. For such customers, Zone to Zone Disaster Recovery can be used as a Disaster Recovery option. - Many other customers have complicated networking infrastructure and do not wish to recreate it in a secondary region due to the associated cost and complexity. Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration much simpler. Such customers prefer simplicity and can also use Availability Zones for Disaster Recovery.-- In some regions that do not have a paired region within the same legal jurisdiction (for example, Southeast Asia), Zone to Zone Disaster Recovery can serve as the de-facto Disaster Recovery solution as it helps ensure legal compliance, since your applications and data do not move across national boundaries.
+- In some regions that do not have a paired region within the same legal jurisdiction (for example, Southeast Asia), Zone to Zone Disaster Recovery can serve as the de-facto Disaster Recovery solution as it helps ensure legal compliance, since your applications and data do not move across national/regional boundaries.
- Zone to Zone Disaster Recovery implies replication of data across shorter distances when compared with Azure to Azure Disaster Recovery and therefore, you may see lower latency and consequently lower RPO. While these are strong advantages, there is a possibility that Zone to Zone Disaster Recovery may fall short of resilience requirements in the event of a region-wide natural disaster.
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
If you plan to refer to the cold tier by using code in a custom application, you
| SDK | Minimum version | |||
-| [.NET](/dotnet/api/azure.storage.blobs) | 12.15.0-beta.1 |
-| [Java](/java/api/overview/azure/storage-blob-readme) | 12.15.0-beta.1 |
-| [Python](/python/api/azure-storage-blob/) | 12.15.0b1 |
-| [JavaScript](/javascript/api/preview-docs/@azure/storage-blob/) | 12.13.0-beta.1 |
+| [.NET](/dotnet/api/azure.storage.blobs) | 12.15.0 |
+| [Java](/java/api/overview/azure/storage-blob-readme) | 12.15.0 |
+| [Python](/python/api/azure-storage-blob/) | 12.15.0 |
+| [JavaScript](/javascript/api/preview-docs/@azure/storage-blob/) | 12.13.0 |
+
+> [!NOTE]
+> If you plan to refer to the cold tier when using the AzCopy tool, make sure to install AzCopy version 12.18.0 or later.
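As an illustration of using the cold tier from code, the sketch below uses the Python client library and assumes that azure-storage-blob 12.15.0 or later exposes a Cold value on StandardBlobTier; the account, container, and blob names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="<container>", blob="archive/data.csv")

# Upload a blob directly into the cold tier.
with open("data.csv", "rb") as data:
    blob.upload_blob(data, standard_blob_tier=StandardBlobTier.COLD, overwrite=True)

# Or move an existing blob to the cold tier.
blob.set_standard_blob_tier(StandardBlobTier.COLD)
```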
## Feature support
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 01/25/2023 Last updated : 05/04/2023
If the replication status for a blob in the source account indicates failure, th
## Billing
-Object replication incurs additional costs on read and write transactions against the source and destination accounts, as well as egress charges for the replication of data from the source account to the destination account and read charges to process change feed.
+There's no cost to configure object replication. This includes enabling change feed, enabling versioning, and adding replication policies. However, object replication incurs costs on read and write transactions against the source and destination accounts, as well as egress charges for the replication of data from the source account to the destination account and read charges to process the change feed.
+
+Here's a breakdown of the costs. To find the price of each cost component, see [Azure Blob Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+
+| Cost to update a blob in the source account | Cost to replicate data in the destination account |
+|--|--|
+|Transaction cost of a write operation|Transaction cost to read a change feed record|
+|Storage cost of the blob and each blob version<sup>1</sup>|Transaction cost to read the blob and blob versions<sup>2</sup>|
+|Cost to add a change feed record|Transaction cost to write the blob and blob versions<sup>2</sup>|
+||Storage cost of the blob and each blob version<sup>1</sup>|
+||Cost of network egress<sup>3</sup>|
++
+<sup>1</sup> See [Blob versioning pricing and Billing](versioning-overview.md#pricing-and-billing).
+
+<sup>2</sup> This includes only blob versions created since the last replication completed.
+
+<sup>3</sup> See [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
+ ## Next steps
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
Title: Migrate applications to use passwordless authentication with Azure Storage
+ Title: Migrate applications to use passwordless authentication with Azure Blob Storage
description: Learn to migrate existing applications away from Shared Key authorization with the account key to instead use Azure AD and Azure RBAC for enhanced security.
-# Migrate an application to use passwordless connections with Azure Storage
+# Migrate an application to use passwordless connections with Azure Blob Storage
-Application requests to Azure Storage must be authenticated using either account access keys or passwordless connections. However, you should prioritize passwordless connections in your applications when possible. Traditional authentication methods that use passwords or secret keys create security risks and complications. Visit the [passwordless connections for Azure services](/azure/developer/intro/passwordless-overview) hub to learn more about the advantages of moving to passwordless connections.
-
-The following tutorial explains how to migrate an existing application to connect to Azure Storage to use passwordless connections instead of a key-based solution. These same migration steps should apply whether you're using access keys directly, or through connection strings.
## Configure roles and users for local development authentication
storage Storage Compliance Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-compliance-offerings.md
Title: Azure Storage compliance offerings
-description: Read a summary of compliance offerings on Azure Storage for national, regional, and industry-specific requirements governing the collection and usage of data.
+description: Read a summary of compliance offerings on Azure Storage for national/regional and industry-specific requirements governing the collection and usage of data.
storage Files Samples Dotnet V11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-samples-dotnet-v11.md
+
+ Title: Azure File Share code samples using .NET version 11.x client libraries
+
+description: View code samples that use the Azure File Share client library for .NET version 11.x.
+++++ Last updated : 05/05/2023+++
+# Azure File Share code samples using .NET version 11.x client libraries
+
+This article shows code samples that use version 11.x of the Azure File Share client library for .NET.
++
+## Prerequisites
+
+Install these packages in your project directory:
+
+- **Microsoft.Azure.Storage.Common**
+- **Microsoft.Azure.Storage.File**
+- **Microsoft.Azure.ConfigurationManager**
+
+Add the following `using` directives:
+
+```csharp
+using Microsoft.Azure; // Namespace for Azure Configuration Manager
+using Microsoft.Azure.Storage; // Namespace for Storage Client Library
+using Microsoft.Azure.Storage.Blob; // Namespace for Azure Blobs
+using Microsoft.Azure.Storage.File; // Namespace for Azure Files
+```
+
+## Access the file share
+
+Related article: [Develop for Azure Files with .NET](storage-dotnet-how-to-use-files.md)
+
+Add the following code to access the file share:
+
+```csharp
+// Parse the connection string for the storage account.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+    Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
+
+// Create a CloudFileClient object for credentialed access to Azure Files.
+CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
+
+// Get a reference to the file share we created previously.
+CloudFileShare share = fileClient.GetShareReference("logs");
+
+// Ensure that the share exists.
+if (share.Exists())
+{
+ // Get a reference to the root directory for the share.
+ CloudFileDirectory rootDir = share.GetRootDirectoryReference();
+
+ // Get a reference to the directory we created previously.
+ CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
+
+ // Ensure that the directory exists.
+ if (sampleDir.Exists())
+ {
+ // Get a reference to the file we created previously.
+ CloudFile file = sampleDir.GetFileReference("Log1.txt");
+
+ // Ensure that the file exists.
+ if (file.Exists())
+ {
+ // Write the contents of the file to the console window.
+ Console.WriteLine(file.DownloadTextAsync().Result);
+ }
+ }
+}
+```
+
+## Set the maximum size for a file share
+
+Related article: [Develop for Azure Files with .NET](storage-dotnet-how-to-use-files.md)
+
+Beginning with version 5.x of the Azure Files client library, you can set the quota (maximum size) for a file share. You can also check to see how much data is currently stored on the share.
+
+Setting the quota for a share limits the total size of the files stored on the share. If the total size of files on the share exceeds the quota, clients can't increase the size of existing files. Clients also can't create new files, unless those files are empty.
+
+The example below shows how to check the current usage for a share and how to set the quota for the share.
+
+```csharp
+// Parse the connection string for the storage account.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+ Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
+
+// Create a CloudFileClient object for credentialed access to Azure Files.
+CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
+
+// Get a reference to the file share we created previously.
+CloudFileShare share = fileClient.GetShareReference("logs");
+
+// Ensure that the share exists.
+if (share.Exists())
+{
+ // Check current usage stats for the share.
+ // Note that the ShareStats object is part of the protocol layer for the File service.
+ Microsoft.Azure.Storage.File.Protocol.ShareStats stats = share.GetStats();
+ Console.WriteLine("Current share usage: {0} GiB", stats.Usage.ToString());
+
+ // Specify the maximum size of the share, in GiB.
+ // This line sets the quota to be 10 GiB greater than the current usage of the share.
+ share.Properties.Quota = 10 + stats.Usage;
+ share.SetProperties();
+
+ // Now check the quota for the share. Call FetchAttributes() to populate the share's properties.
+ share.FetchAttributes();
+ Console.WriteLine("Current share quota: {0} GiB", share.Properties.Quota);
+}
+```
+
+### Generate a shared access signature for a file or file share
+
+Beginning with version 5.x of the Azure Files client library, you can generate a shared access signature (SAS) for a file share or for an individual file.
+
+You can also create a stored access policy on a file share to manage shared access signatures. We recommend creating a stored access policy because it lets you revoke the SAS if it becomes compromised. The following example creates a stored access policy on a share. The example uses that policy to provide the constraints for a SAS on a file in the share.
+
+```csharp
+// Parse the connection string for the storage account.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+ Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
+
+// Create a CloudFileClient object for credentialed access to Azure Files.
+CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
+
+// Get a reference to the file share we created previously.
+CloudFileShare share = fileClient.GetShareReference("logs");
+
+// Ensure that the share exists.
+if (share.Exists())
+{
+ string policyName = "sampleSharePolicy" + DateTime.UtcNow.Ticks;
+
+ // Create a new stored access policy and define its constraints.
+ SharedAccessFilePolicy sharedPolicy = new SharedAccessFilePolicy()
+ {
+ SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
+ Permissions = SharedAccessFilePermissions.Read | SharedAccessFilePermissions.Write
+ };
+
+ // Get existing permissions for the share.
+ FileSharePermissions permissions = share.GetPermissions();
+
+ // Add the stored access policy to the share's policies. Note that each policy must have a unique name.
+ permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
+ share.SetPermissions(permissions);
+
+ // Generate a SAS for a file in the share and associate this access policy with it.
+ CloudFileDirectory rootDir = share.GetRootDirectoryReference();
+ CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
+ CloudFile file = sampleDir.GetFileReference("Log1.txt");
+ string sasToken = file.GetSharedAccessSignature(null, policyName);
+ Uri fileSasUri = new Uri(file.StorageUri.PrimaryUri.ToString() + sasToken);
+
+ // Create a new CloudFile object from the SAS, and write some text to the file.
+ CloudFile fileSas = new CloudFile(fileSasUri);
+ fileSas.UploadText("This write operation is authorized via SAS.");
+ Console.WriteLine(fileSas.DownloadText());
+}
+```
+
+## Copy files
+
+Related article: [Develop for Azure Files with .NET](storage-dotnet-how-to-use-files.md)
+
+Beginning with version 5.x of the Azure Files client library, you can copy a file to another file, a file to a blob, or a blob to a file.
+
+You can also use AzCopy to copy one file to another or to copy a blob to a file or the other way around. See [Get started with AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json).
+
+> [!NOTE]
+> If you are copying a blob to a file, or a file to a blob, you must use a shared access signature (SAS) to authorize access to the source object, even if you are copying within the same storage account.
+
+### Copy a file to another file
+
+The following example copies a file to another file in the same share. You can use [Shared Key authentication](/rest/api/storageservices/authorize-with-shared-key) to do the copy because this operation copies files within the same storage account.
+
+```csharp
+// Parse the connection string for the storage account.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+ Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
+
+// Create a CloudFileClient object for credentialed access to Azure Files.
+CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
+
+// Get a reference to the file share we created previously.
+CloudFileShare share = fileClient.GetShareReference("logs");
+
+// Ensure that the share exists.
+if (share.Exists())
+{
+ // Get a reference to the root directory for the share.
+ CloudFileDirectory rootDir = share.GetRootDirectoryReference();
+
+ // Get a reference to the directory we created previously.
+ CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
+
+ // Ensure that the directory exists.
+ if (sampleDir.Exists())
+ {
+ // Get a reference to the file we created previously.
+ CloudFile sourceFile = sampleDir.GetFileReference("Log1.txt");
+
+ // Ensure that the source file exists.
+ if (sourceFile.Exists())
+ {
+ // Get a reference to the destination file.
+ CloudFile destFile = sampleDir.GetFileReference("Log1Copy.txt");
+
+ // Start the copy operation.
+ destFile.StartCopy(sourceFile);
+
+ // Write the contents of the destination file to the console window.
+ Console.WriteLine(destFile.DownloadText());
+ }
+ }
+}
+```
+
+### Copy a file to a blob
+
+The following example creates a file and copies it to a blob within the same storage account. The example creates a SAS for the source file, which the service uses to authorize access to the source file during the copy operation.
+
+```csharp
+// Parse the connection string for the storage account.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+ Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
+
+// Create a CloudFileClient object for credentialed access to Azure Files.
+CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
+
+// Create a new file share, if it does not already exist.
+CloudFileShare share = fileClient.GetShareReference("sample-share");
+share.CreateIfNotExists();
+
+// Create a new file in the root directory.
+CloudFile sourceFile = share.GetRootDirectoryReference().GetFileReference("sample-file.txt");
+sourceFile.UploadText("A sample file in the root directory.");
+
+// Get a reference to the blob to which the file will be copied.
+CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
+container.CreateIfNotExists();
+CloudBlockBlob destBlob = container.GetBlockBlobReference("sample-blob.txt");
+
+// Create a SAS for the file that's valid for 24 hours.
+// Note that when you are copying a file to a blob, or a blob to a file, you must use a SAS
+// to authorize access to the source object, even if you are copying within the same
+// storage account.
+string fileSas = sourceFile.GetSharedAccessSignature(new SharedAccessFilePolicy()
+{
+ // Only read permissions are required for the source file.
+ Permissions = SharedAccessFilePermissions.Read,
+ SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24)
+});
+
+// Construct the URI to the source file, including the SAS token.
+Uri fileSasUri = new Uri(sourceFile.StorageUri.PrimaryUri.ToString() + fileSas);
+
+// Copy the file to the blob.
+destBlob.StartCopy(fileSasUri);
+
+// Write the contents of the file to the console window.
+Console.WriteLine("Source file contents: {0}", sourceFile.DownloadText());
+Console.WriteLine("Destination blob contents: {0}", destBlob.DownloadText());
+```
+
+You can copy a blob to a file in the same way. If the source object is a blob, then create a SAS to authorize access to that blob during the copy operation.
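+
+Here's a minimal sketch of that reverse copy, reusing the `container` and `share` objects from the preceding example; the destination file name is a placeholder.
+
+```csharp
+// Get references to the source blob and to a destination file in the share's root directory.
+CloudBlockBlob sourceBlob = container.GetBlockBlobReference("sample-blob.txt");
+CloudFile destFile = share.GetRootDirectoryReference().GetFileReference("copy-of-sample-blob.txt");
+
+// Create a read-only SAS on the source blob that's valid for 24 hours.
+string blobSas = sourceBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
+{
+    Permissions = SharedAccessBlobPermissions.Read,
+    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24)
+});
+
+// Copy the blob to the file by using the SAS-authorized URI.
+Uri blobSasUri = new Uri(sourceBlob.Uri.ToString() + blobSas);
+destFile.StartCopy(blobSasUri);
+```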
+
+## Share snapshots
+
+Related article: [Develop for Azure Files with .NET](storage-dotnet-how-to-use-files.md)
+
+Beginning with version 8.5 of the Azure Files client library, you can create a share snapshot. You can also list or browse share snapshots and delete share snapshots. Once created, share snapshots are read-only.
+
+### Create share snapshots
+
+The following example creates a file share snapshot:
+
+```csharp
+// ConnectionString holds the connection string for your storage account.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
+CloudFileClient fClient = storageAccount.CreateCloudFileClient();
+string baseShareName = "myazurefileshare";
+CloudFileShare myShare = fClient.GetShareReference(baseShareName);
+var snapshotShare = myShare.Snapshot();
+```
+
+### List share snapshots
+
+The following example lists the snapshots on a share:
+
+```csharp
+var shares = fClient.ListShares(baseShareName, ShareListingDetails.All);
+```
+
+### List files and directories within share snapshots
+
+The following example browses files and directories within share snapshots:
+
+```csharp
+CloudFileShare mySnapshot = fClient.GetShareReference(baseShareName, snapshotTime);
+var rootDirectory = mySnapshot.GetRootDirectoryReference();
+var items = rootDirectory.ListFilesAndDirectories();
+```
+
+### Restore file shares or files from share snapshots
+
+Taking a snapshot of a file share enables you to recover individual files or the entire file share.
+
+You can restore a file from a file share snapshot by querying the share snapshots of a file share. You can then retrieve a file that belongs to a particular share snapshot. Use that version to directly read or to restore the file.
+
+```csharp
+CloudFileShare liveShare = fClient.GetShareReference(baseShareName);
+var rootDirOfliveShare = liveShare.GetRootDirectoryReference();
+var dirInliveShare = rootDirOfliveShare.GetDirectoryReference(dirName);
+var fileInliveShare = dirInliveShare.GetFileReference(fileName);
+
+CloudFileShare snapshot = fClient.GetShareReference(baseShareName, snapshotTime);
+var rootDirOfSnapshot = snapshot.GetRootDirectoryReference();
+var dirInSnapshot = rootDirOfSnapshot.GetDirectoryReference(dirName);
+var fileInSnapshot = dirInSnapshot.GetFileReference(fileName);
+
+string sasContainerToken = string.Empty;
+SharedAccessFilePolicy sasConstraints = new SharedAccessFilePolicy();
+sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24);
+sasConstraints.Permissions = SharedAccessFilePermissions.Read;
+
+// Generate the shared access signature on the snapshot file, setting the constraints directly on the signature.
+sasContainerToken = fileInSnapshot.GetSharedAccessSignature(sasConstraints);
+
+string sourceUri = fileInSnapshot.Uri.ToString() + sasContainerToken + "&" + fileInSnapshot.SnapshotTime.ToString();
+fileInliveShare.StartCopyAsync(new Uri(sourceUri));
+```
+
+### Delete share snapshots
+
+The following example deletes a file share snapshot:
+
+```csharp
+CloudFileShare mySnapshot = fClient.GetShareReference(baseShareName, snapshotTime);
+mySnapshot.Delete(null, null, null);
+```
+
+## Troubleshoot Azure Files by using metrics
+
+Related article: [Develop for Azure Files with .NET](storage-dotnet-how-to-use-files.md)
+
+Azure Storage Analytics supports metrics for Azure Files. With metrics data, you can trace requests and diagnose issues.
+
+You can enable metrics for Azure Files from the [Azure portal](https://portal.azure.com). You can also enable metrics programmatically by calling the [Set File Service Properties](/rest/api/storageservices/set-file-service-properties) operation with the REST API or one of its analogs in the Azure Files client library.
+
+The following code example shows how to use the .NET client library to enable metrics for Azure Files.
+
+First, add the following `using` directives to your *Program.cs* file, along with the ones you added above:
+
+```csharp
+using Microsoft.Azure.Storage.File.Protocol;
+using Microsoft.Azure.Storage.Shared.Protocol;
+```
+
+Although Azure Blobs, Azure Tables, and Azure Queues use the shared `ServiceProperties` type in the `Microsoft.Azure.Storage.Shared.Protocol` namespace, Azure Files uses its own type, the `FileServiceProperties` type in the `Microsoft.Azure.Storage.File.Protocol` namespace. However, you must reference both namespaces from your code for the following code to compile.
+
+```csharp
+// Parse your storage connection string from your application's configuration file.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+ Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
+// Create the File service client.
+CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
+
+// Set metrics properties for File service.
+// Note that the File service currently uses its own service properties type,
+// available in the Microsoft.Azure.Storage.File.Protocol namespace.
+fileClient.SetServiceProperties(new FileServiceProperties()
+{
+ // Set hour metrics
+ HourMetrics = new MetricsProperties()
+ {
+ MetricsLevel = MetricsLevel.ServiceAndApi,
+ RetentionDays = 14,
+ Version = "1.0"
+ },
+ // Set minute metrics
+ MinuteMetrics = new MetricsProperties()
+ {
+ MetricsLevel = MetricsLevel.ServiceAndApi,
+ RetentionDays = 7,
+ Version = "1.0"
+ }
+});
+
+// Read the metrics properties we just set.
+FileServiceProperties serviceProperties = fileClient.GetServiceProperties();
+Console.WriteLine("Hour metrics:");
+Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
+Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
+Console.WriteLine(serviceProperties.HourMetrics.Version);
+Console.WriteLine();
+Console.WriteLine("Minute metrics:");
+Console.WriteLine(serviceProperties.MinuteMetrics.MetricsLevel);
+Console.WriteLine(serviceProperties.MinuteMetrics.RetentionDays);
+Console.WriteLine(serviceProperties.MinuteMetrics.Version);
+```
storage Files Samples Java V8 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-samples-java-v8.md
+
+ Title: Azure File Share code samples using Java version 8 client libraries
+
+description: View code samples that use the Azure File Share client library for Java version 8.
+++++ Last updated : 04/26/2023+++
+# Azure File Share code samples using Java version 8 client libraries
+
+This article shows code samples that use version 8 of the Azure File Share client library for Java.
++
+## Prerequisites
+
+To use the Azure File Share client library, add the following `import` directives:
+
+```java
+// Include the following imports to use the Azure Files APIs
+import com.microsoft.azure.storage.*;
+import com.microsoft.azure.storage.file.*;
+```
+
+## Access an Azure file share
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#access-an-azure-file-share)
+
+To access your storage account, use the **CloudStorageAccount** object, passing the connection string to its **parse** method.
+
+```java
+// Use the CloudStorageAccount object to connect to your storage account
+try {
+ CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
+} catch (InvalidKeyException invalidKey) {
+ // Handle the exception
+}
+```
+
+**CloudStorageAccount.parse** throws an **InvalidKeyException**, so you'll need to wrap the call in a try/catch block. In this example, *storageConnectionString* holds the connection string for your storage account.
+
+## Create a file share
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#create-a-file-share)
+
+All files and directories in Azure Files are stored in a container called a share.
+
+To obtain access to a share and its contents, create an Azure Files client. The following code example shows how to create a file share:
+
+```java
+// Create the Azure Files client.
+CloudFileClient fileClient = storageAccount.createCloudFileClient();
+```
+
+Using the Azure Files client, you can then obtain a reference to a share.
+
+```java
+// Get a reference to the file share
+CloudFileShare share = fileClient.getShareReference("sampleshare");
+```
+
+To actually create the share, use the **createIfNotExists** method of the **CloudFileShare** object.
+
+```java
+if (share.createIfNotExists()) {
+ System.out.println("New share created");
+}
+```
+
+At this point, **share** holds a reference to a share named **sampleshare**.
+
+## Delete a file share
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#delete-a-file-share)
+
+The following sample code deletes a file share.
+
+Delete a share by calling the **deleteIfExists** method on a **CloudFileShare** object.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the file client.
+ CloudFileClient fileClient = storageAccount.createCloudFileClient();
+
+ // Get a reference to the file share
+ CloudFileShare share = fileClient.getShareReference("sampleshare");
+
+ if (share.deleteIfExists()) {
+ System.out.println("sampleshare deleted");
+ }
+} catch (Exception e) {
+ e.printStackTrace();
+}
+```
+
+## Create a directory
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#create-a-directory)
+
+You can organize storage by putting files inside subdirectories instead of having all of them in the root directory.
+
+The following code creates a subdirectory named **sampledir** under the root directory:
+
+```java
+//Get a reference to the root directory for the share.
+CloudFileDirectory rootDir = share.getRootDirectoryReference();
+
+//Get a reference to the sampledir directory
+CloudFileDirectory sampleDir = rootDir.getDirectoryReference("sampledir");
+
+if (sampleDir.createIfNotExists()) {
+ System.out.println("sampledir created");
+} else {
+ System.out.println("sampledir already exists");
+}
+```
+
+## Delete a directory
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#delete-a-directory)
+
+The following code example shows how to delete a directory. You can't delete a directory that still contains files or subdirectories.
+
+```java
+// Get a reference to the root directory for the share.
+CloudFileDirectory rootDir = share.getRootDirectoryReference();
+
+// Get a reference to the directory you want to delete
+CloudFileDirectory containerDir = rootDir.getDirectoryReference("sampledir");
+
+// Delete the directory
+if ( containerDir.deleteIfExists() ) {
+ System.out.println("Directory deleted");
+}
+```
+
+## Enumerate files and directories in an Azure file share
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#enumerate-files-and-directories-in-an-azure-file-share)
+
+Get a list of files and directories by calling **listFilesAndDirectories** on a **CloudFileDirectory** reference. The method returns a list of **ListFileItem** objects on which you can iterate.
+
+The following code lists files and directories inside the root directory:
+
+```java
+//Get a reference to the root directory for the share.
+CloudFileDirectory rootDir = share.getRootDirectoryReference();
+
+for ( ListFileItem fileItem : rootDir.listFilesAndDirectories() ) {
+ System.out.println(fileItem.getUri());
+}
+```
+
+## Upload a file
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#upload-a-file)
+
+Get a reference to the directory where the file will be uploaded by calling the **getRootDirectoryReference** method on the share object.
+
+```java
+//Get a reference to the root directory for the share.
+CloudFileDirectory rootDir = share.getRootDirectoryReference();
+```
+
+Now that you have a reference to the root directory of the share, you can upload a file onto it using the following code:
+
+```java
+// Define the path to a local file.
+final String filePath = "C:\\temp\\Readme.txt";
+
+CloudFile cloudFile = rootDir.getFileReference("Readme.txt");
+cloudFile.uploadFromFile(filePath);
+```
+
+## Download a file
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#download-a-file)
+
+The following example downloads SampleFile.txt and displays its contents:
+
+```java
+//Get a reference to the root directory for the share.
+CloudFileDirectory rootDir = share.getRootDirectoryReference();
+
+//Get a reference to the directory that contains the file
+CloudFileDirectory sampleDir = rootDir.getDirectoryReference("sampledir");
+
+//Get a reference to the file you want to download
+CloudFile file = sampleDir.getFileReference("SampleFile.txt");
+
+//Write the contents of the file to the console.
+System.out.println(file.downloadText());
+```
+
+## Delete a file
+
+Related article: [Develop for Azure Files with Java](storage-java-how-to-use-file-storage.md#delete-a-file)
+
+The following code deletes a file named SampleFile.txt stored inside a directory named **sampledir**:
+
+```java
+// Get a reference to the root directory for the share.
+CloudFileDirectory rootDir = share.getRootDirectoryReference();
+
+// Get a reference to the directory where the file to be deleted is in
+CloudFileDirectory containerDir = rootDir.getDirectoryReference("sampledir");
+
+String filename = "SampleFile.txt";
+CloudFile file;
+
+file = containerDir.getFileReference(filename);
+if ( file.deleteIfExists() ) {
+ System.out.println(filename + " was deleted");
+}
+```
storage Files Samples Python V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-samples-python-v2.md
+
+ Title: Azure File Share code samples using Python version 2 client libraries
+
+description: View code samples that use the Azure File Share client library for Python version 2.
+++++ Last updated : 05/05/2023+++
+# Azure File Share code samples using Python version 2 client libraries
+
+This article shows code samples that use version 2 of the Azure File Share client library for Python.
++
+## Prerequisites
+
+Install the following package using `pip install`:
+
+```console
+pip install azure-storage-file
+```
+
+Add the following `import` statement:
+
+```python
+from azure.storage.file import FileService
+```
+
+## Create an Azure file share
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#create-an-azure-file-share)
+
+The following code example uses a [FileService](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice) object to create the share if it doesn't exist.
+
+```python
+# Create the FileService object. The account name and key shown here are placeholders.
+file_service = FileService(account_name='myaccount', account_key='mykey')
+
+file_service.create_share('myshare')
+```
+
+## Create a directory
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#create-a-directory)
+
+You can organize storage by putting files inside subdirectories instead of having all of them in the root directory.
+
+The code below will create a subdirectory named *sampledir* under the root directory.
+
+```python
+file_service.create_directory('myshare', 'sampledir')
+```
+
+## Upload a file
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#upload-a-file)
+
+In this section, you'll learn how to upload a file from local storage into Azure Files.
+
+An Azure file share contains, at the least, a root directory where files can reside. To create a file and upload data, use any of the following methods:
+
+- [create_file_from_path](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-path)
+- [create_file_from_stream](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-stream)
+- [create_file_from_bytes](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-bytes)
+- [create_file_from_text](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-text)
+
+These methods perform the necessary chunking when the size of the data exceeds 64 MiB.
+
+`create_file_from_path` uploads the contents of a file from the specified path, and `create_file_from_stream` uploads the contents from an already opened file/stream. `create_file_from_bytes` uploads an array of bytes, and `create_file_from_text` uploads the specified text value using the specified encoding (defaults to UTF-8).
+
+The following example uploads the contents of the *sunset.png* file into the **myfile** file.
+
+```python
+from azure.storage.file import ContentSettings
+file_service.create_file_from_path(
+ 'myshare',
+ None, # We want to create this file in the root directory, so we specify None for the directory_name
+ 'myfile',
+ 'sunset.png',
+ content_settings=ContentSettings(content_type='image/png'))
+```
+
+## Enumerate files and directories in an Azure file share
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#enumerate-files-and-directories-in-an-azure-file-share)
+
+To list the files and directories in a share, use the [list_directories_and_files](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-list-directories-and-files) method. This method returns a generator. The following code outputs the **name** of each file and directory in a share to the console.
+
+```python
+generator = file_service.list_directories_and_files('myshare')
+for file_or_dir in generator:
+ print(file_or_dir.name)
+```
+
+## Download a file
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#download-a-file)
+
+To download data from a file, use any of the following methods:
+
+- [get_file_to_path](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-path)
+- [get_file_to_stream](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#get-file-to-stream-share-name--directory-name--file-name--stream--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--snapshot-none-)
+- [get_file_to_bytes](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-bytes)
+- [get_file_to_text](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-text)
+
+These methods perform the necessary chunking when the size of the data exceeds 64 MiB.
+
+The following example demonstrates using `get_file_to_path` to download the contents of the **myfile** file and store it to the *out-sunset.png* file.
+
+```python
+file_service.get_file_to_path('myshare', None, 'myfile', 'out-sunset.png')
+```
+
+## Create a share snapshot
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#create-a-share-snapshot)
+
+You can create a point in time copy of your entire file share.
+
+```python
+snapshot = file_service.snapshot_share(share_name)
+snapshot_id = snapshot.snapshot
+```
+
+**Create share snapshot with metadata**
+
+```python
+metadata = {"foo": "bar"}
+snapshot = file_service.snapshot_share(share_name, metadata=metadata)
+```
+
+## List shares and snapshots
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#list-shares-and-snapshots)
+
+You can list all the snapshots for a particular share.
+
+```python
+shares = list(file_service.list_shares(include_snapshots=True))
+```
+
+## Browse share snapshot
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#browse-share-snapshot)
+
+You can browse each share snapshot to retrieve files and directories from that point in time.
+
+```python
+directories_and_files = list(
+ file_service.list_directories_and_files(share_name, snapshot=snapshot_id))
+```
+
+## Get file from share snapshot
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#get-file-from-share-snapshot)
+
+You can download a file from a share snapshot. This enables you to restore a previous version of a file.
+
+```python
+with open(FILE_PATH, 'wb') as stream:
+ file = file_service.get_file_to_stream(
+ share_name, directory_name, file_name, stream, snapshot=snapshot_id)
+```
+
+## Delete a single share snapshot
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#delete-a-single-share-snapshot)
+
+You can delete a single share snapshot.
+
+```python
+file_service.delete_share(share_name, snapshot=snapshot_id)
+```
+
+## Delete a file
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#delete-a-file)
+
+To delete a file, call [delete_file](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#delete-file-share-name--directory-name--file-name--timeout-none-).
+
+The following code example shows how to delete a file:
+
+```python
+file_service.delete_file('myshare', None, 'myfile')
+```
+
+## Delete share when share snapshots exist
+
+Related article: [Develop for Azure Files with Python](storage-python-how-to-use-file-storage.md#delete-share-when-share-snapshots-exist)
+
+A share that contains snapshots cannot be deleted unless all the snapshots are deleted first.
+
+The following code example shows how to delete a share:
+
+```python
+file_service.delete_share(share_name, delete_snapshots=DeleteSnapshot.Include)
+```
storage Storage Dotnet How To Use Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-dotnet-how-to-use-files.md
Add all the code examples in this article to the `Program` class in the *Program
Refer to these packages in your project:
-# [Azure \.NET SDK v12](#tab/dotnet)
- - [Azure core library for .NET](https://www.nuget.org/packages/Azure.Core/): This package is the implementation of the Azure client pipeline. - [Azure Storage Blob client library for .NET](https://www.nuget.org/packages/Azure.Storage.Blobs/): This package provides programmatic access to blob resources in your storage account. - [Azure Storage Files client library for .NET](https://www.nuget.org/packages/Azure.Storage.Files.Shares/): This package provides programmatic access to file resources in your storage account.
You can use NuGet to obtain the packages. Follow these steps:
- **Azure.Storage.Files.Shares** - **System.Configuration.ConfigurationManager**
-# [Azure \.NET SDK v11](#tab/dotnetv11)
--- [Microsoft Azure Storage common library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Storage.Common/): This package provides programmatic access to common resources in your storage account.-- [Microsoft Azure Storage Blob library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Storage.Blob/): This package provides programmatic access to blob resources in your storage account.-- [Microsoft Azure Storage File library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Storage.File/): This package provides programmatic access to file resources in your storage account.-- [Microsoft Azure Configuration Manager library for .NET](https://www.nuget.org/packages/Microsoft.Azure.ConfigurationManager/): This package provides a class for parsing a connection string in a configuration file, wherever your application runs.-
-You can use NuGet to obtain the packages. Follow these steps:
-
-1. In **Solution Explorer**, right-click your project and choose **Manage NuGet Packages**.
-1. In **NuGet Package Manager**, select **Browse**. Then search for and choose **Microsoft.Azure.Storage.Blob**, and then select **Install**.
-
- This step installs the package and its dependencies.
-1. Search for and install these packages:
-
- - **Microsoft.Azure.Storage.Common**
- - **Microsoft.Azure.Storage.File**
- - **Microsoft.Azure.ConfigurationManager**
--- ## Save your storage account credentials to the App.config file Next, save your credentials in your project's *App.config* file. In **Solution Explorer**, double-click `App.config` and edit the file so that it is similar to the following example.
-# [Azure \.NET SDK v12](#tab/dotnet)
- Replace `myaccount` with your storage account name and `mykey` with your storage account key. :::code language="xml" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/app.config" highlight="5,6,7":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-Replace `myaccount` with your storage account name and `StorageAccountKeyEndingIn==` with your storage account key.
-
-```xml
-<?xml version="1.0" encoding="utf-8" ?>
-<configuration>
- <startup>
- <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
- </startup>
- <appSettings>
- <add key="StorageConnectionString"
- value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=StorageAccountKeyEndingIn==" />
- </appSettings>
-</configuration>
-```
--- > [!NOTE] > The Azurite storage emulator does not currently support Azure Files. Your connection string must target an Azure storage account in the cloud to work with Azure Files.
Replace `myaccount` with your storage account name and `StorageAccountKeyEndingI
In **Solution Explorer**, open the *Program.cs* file, and add the following using directives to the top of the file.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_UsingStatements":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-using Microsoft.Azure; // Namespace for Azure Configuration Manager
-using Microsoft.Azure.Storage; // Namespace for Storage Client Library
-using Microsoft.Azure.Storage.Blob; // Namespace for Azure Blobs
-using Microsoft.Azure.Storage.File; // Namespace for Azure Files
-```
---- ## Access the file share programmatically In the *Program.cs* file, add the following code to access the file share programmatically.
-# [Azure \.NET SDK v12](#tab/dotnet)
- The following method creates a file share if it doesn't already exist. The method starts by creating a [ShareClient](/dotnet/api/azure.storage.files.shares.shareclient) object from a connection string. The sample then attempts to download a file we created earlier. Call this method from `Main()`. :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_CreateShare":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-Next, add the following content to the `Main()` method, after the code shown above, to retrieve the connection string. This code gets a reference to the file we created earlier and outputs its contents.
-
-```csharp
-// Create a CloudFileClient object for credentialed access to Azure Files.
-CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
-
-// Get a reference to the file share we created previously.
-CloudFileShare share = fileClient.GetShareReference("logs");
-
-// Ensure that the share exists.
-if (share.Exists())
-{
- // Get a reference to the root directory for the share.
- CloudFileDirectory rootDir = share.GetRootDirectoryReference();
-
- // Get a reference to the directory we created previously.
- CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
-
- // Ensure that the directory exists.
- if (sampleDir.Exists())
- {
- // Get a reference to the file we created previously.
- CloudFile file = sampleDir.GetFileReference("Log1.txt");
-
- // Ensure that the file exists.
- if (file.Exists())
- {
- // Write the contents of the file to the console window.
- Console.WriteLine(file.DownloadTextAsync().Result);
- }
- }
-}
-```
-
-Run the console application to see the output.
--- ## Set the maximum size for a file share Beginning with version 5.x of the Azure Files client library, you can set the quota (maximum size) for a file share. You can also check to see how much data is currently stored on the share.
Setting the quota for a share limits the total size of the files stored on the s
The example below shows how to check the current usage for a share and how to set the quota for the share.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_SetMaxShareSize":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-// Parse the connection string for the storage account.
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
-
-// Create a CloudFileClient object for credentialed access to Azure Files.
-CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
-
-// Get a reference to the file share we created previously.
-CloudFileShare share = fileClient.GetShareReference("logs");
-
-// Ensure that the share exists.
-if (share.Exists())
-{
- // Check current usage stats for the share.
- // Note that the ShareStats object is part of the protocol layer for the File service.
- Microsoft.Azure.Storage.File.Protocol.ShareStats stats = share.GetStats();
- Console.WriteLine("Current share usage: {0} GiB", stats.Usage.ToString());
-
- // Specify the maximum size of the share, in GiB.
- // This line sets the quota to be 10 GiB greater than the current usage of the share.
- share.Properties.Quota = 10 + stats.Usage;
- share.SetProperties();
-
- // Now check the quota for the share. Call FetchAttributes() to populate the share's properties.
- share.FetchAttributes();
- Console.WriteLine("Current share quota: {0} GiB", share.Properties.Quota);
-}
-```
--- ### Generate a shared access signature for a file or file share Beginning with version 5.x of the Azure Files client library, you can generate a shared access signature (SAS) for a file share or for an individual file.
-# [Azure \.NET SDK v12](#tab/dotnet)
- The following example method returns a SAS on a file in the specified share. :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_GetFileSasUri":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-You can also create a stored access policy on a file share to manage shared access signatures. We recommend creating a stored access policy because it lets you revoke the SAS if it becomes compromised. The following example creates a stored access policy on a share. The example uses that policy to provide the constraints for a SAS on a file in the share.
-
-```csharp
-// Parse the connection string for the storage account.
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
-
-// Create a CloudFileClient object for credentialed access to Azure Files.
-CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
-
-// Get a reference to the file share we created previously.
-CloudFileShare share = fileClient.GetShareReference("logs");
-
-// Ensure that the share exists.
-if (share.Exists())
-{
- string policyName = "sampleSharePolicy" + DateTime.UtcNow.Ticks;
-
- // Create a new stored access policy and define its constraints.
- SharedAccessFilePolicy sharedPolicy = new SharedAccessFilePolicy()
- {
- SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
- Permissions = SharedAccessFilePermissions.Read | SharedAccessFilePermissions.Write
- };
-
- // Get existing permissions for the share.
- FileSharePermissions permissions = share.GetPermissions();
-
- // Add the stored access policy to the share's policies. Note that each policy must have a unique name.
- permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
- share.SetPermissions(permissions);
-
- // Generate a SAS for a file in the share and associate this access policy with it.
- CloudFileDirectory rootDir = share.GetRootDirectoryReference();
- CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
- CloudFile file = sampleDir.GetFileReference("Log1.txt");
- string sasToken = file.GetSharedAccessSignature(null, policyName);
- Uri fileSasUri = new Uri(file.StorageUri.PrimaryUri.ToString() + sasToken);
-
- // Create a new CloudFile object from the SAS, and write some text to the file.
- CloudFile fileSas = new CloudFile(fileSasUri);
- fileSas.UploadText("This write operation is authorized via SAS.");
- Console.WriteLine(fileSas.DownloadText());
-}
-```
--- For more information about creating and using shared access signatures, see [How a shared access signature works](../common/storage-sas-overview.md?toc=/azure/storage/files/toc.json#how-a-shared-access-signature-works). ## Copy files
You can also use AzCopy to copy one file to another or to copy a blob to a file
The following example copies a file to another file in the same share. You can use [Shared Key authentication](/rest/api/storageservices/authorize-with-shared-key) to do the copy because this operation copies files within the same storage account.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_CopyFile":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-// Parse the connection string for the storage account.
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
-
-// Create a CloudFileClient object for credentialed access to Azure Files.
-CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
-
-// Get a reference to the file share we created previously.
-CloudFileShare share = fileClient.GetShareReference("logs");
-
-// Ensure that the share exists.
-if (share.Exists())
-{
- // Get a reference to the root directory for the share.
- CloudFileDirectory rootDir = share.GetRootDirectoryReference();
-
- // Get a reference to the directory we created previously.
- CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
-
- // Ensure that the directory exists.
- if (sampleDir.Exists())
- {
- // Get a reference to the file we created previously.
- CloudFile sourceFile = sampleDir.GetFileReference("Log1.txt");
-
- // Ensure that the source file exists.
- if (sourceFile.Exists())
- {
- // Get a reference to the destination file.
- CloudFile destFile = sampleDir.GetFileReference("Log1Copy.txt");
-
- // Start the copy operation.
- destFile.StartCopy(sourceFile);
-
- // Write the contents of the destination file to the console window.
- Console.WriteLine(destFile.DownloadText());
- }
- }
-}
-```
--- ### Copy a file to a blob The following example creates a file and copies it to a blob within the same storage account. The example creates a SAS for the source file, which the service uses to authorize access to the source file during the copy operation.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_CopyFileToBlob":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-// Parse the connection string for the storage account.
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
-
-// Create a CloudFileClient object for credentialed access to Azure Files.
-CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
-
-// Create a new file share, if it does not already exist.
-CloudFileShare share = fileClient.GetShareReference("sample-share");
-share.CreateIfNotExists();
-
-// Create a new file in the root directory.
-CloudFile sourceFile = share.GetRootDirectoryReference().GetFileReference("sample-file.txt");
-sourceFile.UploadText("A sample file in the root directory.");
-
-// Get a reference to the blob to which the file will be copied.
-CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
-container.CreateIfNotExists();
-CloudBlockBlob destBlob = container.GetBlockBlobReference("sample-blob.txt");
-
-// Create a SAS for the file that's valid for 24 hours.
-// Note that when you are copying a file to a blob, or a blob to a file, you must use a SAS
-// to authorize access to the source object, even if you are copying within the same
-// storage account.
-string fileSas = sourceFile.GetSharedAccessSignature(new SharedAccessFilePolicy()
-{
- // Only read permissions are required for the source file.
- Permissions = SharedAccessFilePermissions.Read,
- SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24)
-});
-
-// Construct the URI to the source file, including the SAS token.
-Uri fileSasUri = new Uri(sourceFile.StorageUri.PrimaryUri.ToString() + fileSas);
-
-// Copy the file to the blob.
-destBlob.StartCopy(fileSasUri);
-
-// Write the contents of the file to the console window.
-Console.WriteLine("Source file contents: {0}", sourceFile.DownloadText());
-Console.WriteLine("Destination blob contents: {0}", destBlob.DownloadText());
-```
--- You can copy a blob to a file in the same way. If the source object is a blob, then create a SAS to authorize access to that blob during the copy operation. ## Share snapshots
Beginning with version 8.5 of the Azure Files client library, you can create a s
The following example creates a file share snapshot.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_CreateShareSnapshot":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-storageAccount = CloudStorageAccount.Parse(ConnectionString);
-fClient = storageAccount.CreateCloudFileClient();
-string baseShareName = "myazurefileshare";
-CloudFileShare myShare = fClient.GetShareReference(baseShareName);
-var snapshotShare = myShare.Snapshot();
-
-```
--- ### List share snapshots The following example lists the snapshots on a share.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_ListShareSnapshots":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-var shares = fClient.ListShares(baseShareName, ShareListingDetails.All);
-```
--- ### List files and directories within share snapshots The following example browses files and directories within share snapshots.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_ListSnapshotContents":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-CloudFileShare mySnapshot = fClient.GetShareReference(baseShareName, snapshotTime);
-var rootDirectory = mySnapshot.GetRootDirectoryReference();
-var items = rootDirectory.ListFilesAndDirectories();
-```
--- ### Restore file shares or files from share snapshots Taking a snapshot of a file share enables you to recover individual files or the entire file share. You can restore a file from a file share snapshot by querying the share snapshots of a file share. You can then retrieve a file that belongs to a particular share snapshot. Use that version to directly read or to restore the file.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_RestoreFileFromSnapshot":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-CloudFileShare liveShare = fClient.GetShareReference(baseShareName);
-var rootDirOfliveShare = liveShare.GetRootDirectoryReference();
-var dirInliveShare = rootDirOfliveShare.GetDirectoryReference(dirName);
-var fileInliveShare = dirInliveShare.GetFileReference(fileName);
-
-CloudFileShare snapshot = fClient.GetShareReference(baseShareName, snapshotTime);
-var rootDirOfSnapshot = snapshot.GetRootDirectoryReference();
-var dirInSnapshot = rootDirOfSnapshot.GetDirectoryReference(dirName);
-var fileInSnapshot = dir1InSnapshot.GetFileReference(fileName);
-
-string sasContainerToken = string.Empty;
-SharedAccessFilePolicy sasConstraints = new SharedAccessFilePolicy();
-sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24);
-sasConstraints.Permissions = SharedAccessFilePermissions.Read;
-
-//Generate the shared access signature on the container, setting the constraints directly on the signature.
-sasContainerToken = fileInSnapshot.GetSharedAccessSignature(sasConstraints);
-
-string sourceUri = (fileInSnapshot.Uri.ToString() + sasContainerToken + "&" + fileInSnapshot.SnapshotTime.ToString()); ;
-fileInliveShare.StartCopyAsync(new Uri(sourceUri));
-```
--- ### Delete share snapshots The following example deletes a file share snapshot.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_DeleteSnapshot":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-```csharp
-CloudFileShare mySnapshot = fClient.GetShareReference(baseShareName, snapshotTime); mySnapshot.Delete(null, null, null);
-```
--- ## Troubleshoot Azure Files by using metrics<a name="troubleshooting-azure-files-using-metrics"></a> Azure Storage Analytics supports metrics for Azure Files. With metrics data, you can trace requests and diagnose issues.
You can enable metrics for Azure Files from the [Azure portal](https://portal.az
The following code example shows how to use the .NET client library to enable metrics for Azure Files.
-# [Azure \.NET SDK v12](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/files/howto/dotnet/dotnet-v12/FileShare.cs" id="snippet_UseMetrics":::
-# [Azure \.NET SDK v11](#tab/dotnetv11)
-
-First, add the following `using` directives to your *Program.cs* file, along with the ones you added above:
-
-```csharp
-using Microsoft.Azure.Storage.File.Protocol;
-using Microsoft.Azure.Storage.Shared.Protocol;
-```
-
-Although Azure Blobs, Azure Tables, and Azure Queues use the shared `ServiceProperties` type in the `Microsoft.Azure.Storage.Shared.Protocol` namespace, Azure Files uses its own type, the `FileServiceProperties` type in the `Microsoft.Azure.Storage.File.Protocol` namespace. You must reference both namespaces from your code, however, for the following code to compile.
-
-```csharp
-// Parse your storage connection string from your application's configuration file.
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
-// Create the File service client.
-CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
-
-// Set metrics properties for File service.
-// Note that the File service currently uses its own service properties type,
-// available in the Microsoft.Azure.Storage.File.Protocol namespace.
-fileClient.SetServiceProperties(new FileServiceProperties()
-{
- // Set hour metrics
- HourMetrics = new MetricsProperties()
- {
- MetricsLevel = MetricsLevel.ServiceAndApi,
- RetentionDays = 14,
- Version = "1.0"
- },
- // Set minute metrics
- MinuteMetrics = new MetricsProperties()
- {
- MetricsLevel = MetricsLevel.ServiceAndApi,
- RetentionDays = 7,
- Version = "1.0"
- }
-});
-
-// Read the metrics properties we just set.
-FileServiceProperties serviceProperties = fileClient.GetServiceProperties();
-Console.WriteLine("Hour metrics:");
-Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
-Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
-Console.WriteLine(serviceProperties.HourMetrics.Version);
-Console.WriteLine();
-Console.WriteLine("Minute metrics:");
-Console.WriteLine(serviceProperties.MinuteMetrics.MetricsLevel);
-Console.WriteLine(serviceProperties.MinuteMetrics.RetentionDays);
-Console.WriteLine(serviceProperties.MinuteMetrics.Version);
-```
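For reference, a minimal sketch of the equivalent metrics configuration with the v12 `Azure.Storage.Files.Shares` package might look like the following. The connection string and retention values are placeholders, and this is an illustration rather than the article's own v12 sample.

```csharp
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;

// Placeholder connection string; replace with your own.
var serviceClient = new ShareServiceClient("<connection-string>");

// Read the current service properties, update the metrics settings, and write them back.
ShareServiceProperties properties = serviceClient.GetProperties().Value;

properties.HourMetrics = new ShareMetrics
{
    Version = "1.0",
    Enabled = true,
    IncludeApis = true,
    RetentionPolicy = new ShareRetentionPolicy { Enabled = true, Days = 14 }
};

properties.MinuteMetrics = new ShareMetrics
{
    Version = "1.0",
    Enabled = true,
    IncludeApis = true,
    RetentionPolicy = new ShareRetentionPolicy { Enabled = true, Days = 7 }
};

serviceClient.SetProperties(properties);
```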
--- If you encounter any problems, refer to [Troubleshoot Azure Files](files-troubleshoot.md). ## Next steps
For more information about Azure Files, see the following resources:
- [Azure Storage APIs for .NET](/dotnet/api/overview/azure/storage) - [File Service REST API](/rest/api/storageservices/File-Service-REST-API)+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](files-samples-dotnet-v11.md).
storage Storage Java How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-java-how-to-use-file-storage.md
To build the samples, you'll need the Java Development Kit (JDK) and the [Azure
To use the Azure Files APIs, add the following code to the top of the Java file from where you intend to access Azure Files.
-# [Azure Java SDK v12](#tab/java)
- :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_ImportStatements":::
-# [Azure Java SDK v8](#tab/java8)
-
-```java
-// Include the following imports to use Azure Files APIs v11
-import com.microsoft.azure.storage.*;
-import com.microsoft.azure.storage.file.*;
-```
--- ## Set up an Azure storage connection string To use Azure Files, you need to connect to your Azure storage account. Configure a connection string and use it to connect to your storage account. Define a static variable to hold the connection string.
-# [Azure Java SDK v12](#tab/java)
- Replace *\<storage_account_name\>* and *\<storage_account_key\>* with the actual values for your storage account. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_ConnectionString":::
-# [Azure Java SDK v8](#tab/java8)
-
-Replace *your_storage_account_name* and *your_storage_account_key* with the actual values for your storage account.
-
-```java
-// Configure the connection-string with your values
-public static final String storageConnectionString =
- "DefaultEndpointsProtocol=http;" +
- "AccountName=your_storage_account_name;" +
- "AccountKey=your_storage_account_key";
-```
--- ## Access an Azure file share
-# [Azure Java SDK v12](#tab/java)
- To access Azure Files, create a [ShareClient](/java/api/com.azure.storage.file.share.shareclient) object. Use the [ShareClientBuilder](/java/api/com.azure.storage.file.share.shareclientbuilder) class to build a new **ShareClient** object. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_createClient":::
-# [Azure Java SDK v8](#tab/java8)
-
-To access your storage account, use the **CloudStorageAccount** object, passing the connection string to its **parse** method.
-
-```java
-// Use the CloudStorageAccount object to connect to your storage account
-try {
- CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
-} catch (InvalidKeyException invalidKey) {
- // Handle the exception
-}
-```
-
-**CloudStorageAccount.parse** throws an InvalidKeyException, so you'll need to put it inside a try/catch block.
--- ## Create a file share All files and directories in Azure Files are stored in a container called a share.
-# [Azure Java SDK v12](#tab/java)
- The [ShareClient.create](/java/api/com.azure.storage.file.share.shareclient.create) method throws an exception if the share already exists. Put the call to **create** in a `try/catch` block and handle the exception. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_createFileShare":::
-# [Azure Java SDK v8](#tab/java8)
-
-To obtain access to a share and its contents, create an Azure Files client.
-
-```java
-// Create the Azure Files client.
-CloudFileClient fileClient = storageAccount.createCloudFileClient();
-```
-
-Using the Azure Files client, you can then obtain a reference to a share.
-
-```java
-// Get a reference to the file share
-CloudFileShare share = fileClient.getShareReference("sampleshare");
-```
-
-To actually create the share, use the **createIfNotExists** method of the **CloudFileShare** object.
-
-```java
-if (share.createIfNotExists()) {
- System.out.println("New share created");
-}
-```
-
-At this point, **share** holds a reference to a share named **sampleshare**.
--- ## Delete a file share The following sample code deletes a file share.
-# [Azure Java SDK v12](#tab/java)
- Delete a share by calling the [ShareClient.delete](/java/api/com.azure.storage.file.share.shareclient.delete) method. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_deleteFileShare":::
-# [Azure Java SDK v8](#tab/java8)
-
-Delete a share by calling the **deleteIfExists** method on a **CloudFileShare** object.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
-
- // Create the file client.
- CloudFileClient fileClient = storageAccount.createCloudFileClient();
-
- // Get a reference to the file share
- CloudFileShare share = fileClient.getShareReference("sampleshare");
-
- if (share.deleteIfExists()) {
- System.out.println("sampleshare deleted");
- }
-} catch (Exception e) {
- e.printStackTrace();
-}
-```
--- ## Create a directory Organize storage by putting files inside subdirectories instead of having all of them in the root directory.
-# [Azure Java SDK v12](#tab/java)
- The following code creates a directory by calling [ShareDirectoryClient.create](/java/api/com.azure.storage.file.share.sharedirectoryclient.create). The example method returns a `Boolean` value indicating if it successfully created the directory. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_createDirectory":::
-# [Azure Java SDK v8](#tab/java8)
-
-The following code creates a subdirectory named **sampledir** under the root directory.
-
-```java
-//Get a reference to the root directory for the share.
-CloudFileDirectory rootDir = share.getRootDirectoryReference();
-
-//Get a reference to the sampledir directory
-CloudFileDirectory sampleDir = rootDir.getDirectoryReference("sampledir");
-
-if (sampleDir.createIfNotExists()) {
- System.out.println("sampledir created");
-} else {
- System.out.println("sampledir already exists");
-}
-```
--- ## Delete a directory Deleting a directory is a straightforward task. You can't delete a directory that still contains files or subdirectories.
-# [Azure Java SDK v12](#tab/java)
- The [ShareDirectoryClient.delete](/java/api/com.azure.storage.file.share.sharedirectoryclient.delete) method throws an exception if the directory doesn't exist or isn't empty. Put the call to **delete** in a `try/catch` block and handle the exception. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_deleteDirectory":::
-# [Azure Java SDK v8](#tab/java8)
-
-```java
-// Get a reference to the root directory for the share.
-CloudFileDirectory rootDir = share.getRootDirectoryReference();
-
-// Get a reference to the directory you want to delete
-CloudFileDirectory containerDir = rootDir.getDirectoryReference("sampledir");
-
-// Delete the directory
-if ( containerDir.deleteIfExists() ) {
- System.out.println("Directory deleted");
-}
-```
--- ## Enumerate files and directories in an Azure file share
-# [Azure Java SDK v12](#tab/java)
- Get a list of files and directories by calling [ShareDirectoryClient.listFilesAndDirectories](/java/api/com.azure.storage.file.share.sharedirectoryclient.listfilesanddirectories). The method returns a list of [ShareFileItem](/java/api/com.azure.storage.file.share.models.sharefileitem) objects on which you can iterate. The following code lists files and directories inside the directory specified by the *dirName* parameter. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_enumerateFilesAndDirs":::
-# [Azure Java SDK v8](#tab/java8)
-
-Get a list of files and directories by calling **listFilesAndDirectories** on a **CloudFileDirectory** reference. The method returns a list of **ListFileItem** objects on which you can iterate. The following code lists files and directories inside the root directory.
-
-```java
-//Get a reference to the root directory for the share.
-CloudFileDirectory rootDir = share.getRootDirectoryReference();
-
-for ( ListFileItem fileItem : rootDir.listFilesAndDirectories() ) {
- System.out.println(fileItem.getUri());
-}
-```
--- ## Upload a file Learn how to upload a file from local storage.
-# [Azure Java SDK v12](#tab/java)
- The following code uploads a local file to Azure Files by calling the [ShareFileClient.uploadFromFile](/java/api/com.azure.storage.file.share.sharefileclient.uploadfromfile) method. The following example method returns a `Boolean` value indicating if it successfully uploaded the specified file. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_uploadFile":::
-# [Azure Java SDK v8](#tab/java8)
-
-Get a reference to the directory where the file will be uploaded by calling the **getRootDirectoryReference** method on the share object.
-
-```java
-//Get a reference to the root directory for the share.
-CloudFileDirectory rootDir = share.getRootDirectoryReference();
-```
-
-Now that you have a reference to the root directory of the share, you can upload a file onto it using the following code.
-
-```java
-// Define the path to a local file.
-final String filePath = "C:\\temp\\Readme.txt";
-
-CloudFile cloudFile = rootDir.getFileReference("Readme.txt");
-cloudFile.uploadFromFile(filePath);
-```
--- ## Download a file One of the more frequent operations is to download files from an Azure file share.
-# [Azure Java SDK v12](#tab/java)
- The following example downloads the specified file to the local directory specified in the *destDir* parameter. The example method makes the downloaded filename unique by prepending the date and time. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_downloadFile":::
-# [Azure Java SDK v8](#tab/java8)
-
-The following example downloads SampleFile.txt and displays its contents.
-
-```java
-//Get a reference to the root directory for the share.
-CloudFileDirectory rootDir = share.getRootDirectoryReference();
-
-//Get a reference to the directory that contains the file
-CloudFileDirectory sampleDir = rootDir.getDirectoryReference("sampledir");
-
-//Get a reference to the file you want to download
-CloudFile file = sampleDir.getFileReference("SampleFile.txt");
-
-//Write the contents of the file to the console.
-System.out.println(file.downloadText());
-```
--- ## Delete a file Another common Azure Files operation is file deletion.
-# [Azure Java SDK v12](#tab/java)
- The following code deletes the specified file. First, the example creates a [ShareDirectoryClient](/java/api/com.azure.storage.file.share.sharedirectoryclient) based on the *dirName* parameter. Then, the code gets a [ShareFileClient](/java/api/com.azure.storage.file.share.sharefileclient) from the directory client, based on the *fileName* parameter. Finally, the example method calls [ShareFileClient.delete](/java/api/com.azure.storage.file.share.sharefileclient.delete) to delete the file. :::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_deleteFile":::
-# [Azure Java SDK v8](#tab/java8)
-
-The following code deletes a file named SampleFile.txt stored inside a directory named **sampledir**.
-
-```java
-// Get a reference to the root directory for the share.
-CloudFileDirectory rootDir = share.getRootDirectoryReference();
-
-// Get a reference to the directory where the file to be deleted is in
-CloudFileDirectory containerDir = rootDir.getDirectoryReference("sampledir");
-
-String filename = "SampleFile.txt";
-CloudFile file;
-
-file = containerDir.getFileReference(filename);
-if ( file.deleteIfExists() ) {
- System.out.println(filename + " was deleted");
-}
-```
--- ## Next steps If you would like to learn more about other Azure storage APIs, follow these links.
If you would like to learn more about other Azure storage APIs, follow these lin
- [Azure Storage Team Blog](https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/) - [Transfer data with the AzCopy Command-Line Utility](../common/storage-use-azcopy-v10.md) - [Troubleshoot Azure Files](files-troubleshoot.md)+
+For related code samples using deprecated Java version 8 SDKs, see [Code samples using Java version 8](files-samples-java-v8.md).
storage Storage Python How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-python-how-to-use-file-storage.md
Previously updated : 10/08/2020 Last updated : 05/04/2023
from azure.storage.file import FileService
# [Azure Python SDK v12](#tab/python)
-[ShareServiceClient](/azure/developer/python/sdk/storage/azure-storage-file-share/azure.storage.fileshare.shareserviceclient) lets you work with shares, directories, and files. The following code creates a `ShareServiceClient` object using the storage account connection string.
+[ShareServiceClient](/azure/developer/python/sdk/storage/azure-storage-file-share/azure.storage.fileshare.shareserviceclient) lets you work with shares, directories, and files. This code creates a `ShareServiceClient` object using the storage account connection string:
:::code language="python" source="~/azure-storage-snippets/files/howto/python/python-v12/file_share_ops.py" id="Snippet_CreateShareServiceClient"::: # [Azure Python SDK v2](#tab/python2)
-The [FileService](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true) object lets you work with shares, directories, and files. The following code creates a `FileService` object using the storage account name and account key. Replace `<myaccount>` and `<mykey>` with your account name and key.
+The [FileService](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice) object lets you work with shares, directories, and files. The following code creates a `FileService` object using the storage account name and account key. Replace `<myaccount>` and `<mykey>` with your account name and key.
```python file_service = FileService(account_name='myaccount', account_key='mykey')
The following code example uses a [ShareClient](/azure/developer/python/sdk/stor
# [Azure Python SDK v2](#tab/python2)
-The following code example uses a [FileService](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true) object to create the share if it doesn't exist.
+The following code example uses a [FileService](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice) object to create the share if it doesn't exist.
```python file_service.create_share('myshare')
The following method creates a directory in the root of the specified file share
# [Azure Python SDK v2](#tab/python2)
-The code below will create a subdirectory named *sampledir* under the root directory.
+This code creates a subdirectory named *sampledir* under the root directory:
```python file_service.create_directory('myshare', 'sampledir')
file_service.create_directory('myshare', 'sampledir')
## Upload a file
-In this section, you'll learn how to upload a file from local storage into Azure Files.
+In this section, you learn how to upload a file from local storage into Azure Files.
# [Azure Python SDK v12](#tab/python)
The following method uploads the contents of the specified file into the specifi
# [Azure Python SDK v2](#tab/python2)
-An Azure file share contains, at the least, a root directory where files can reside. To create a file and upload data, use the [create_file_from_path](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#create-file-from-path-share-name--directory-name--file-name--local-file-path--content-settings-none--metadata-none--validate-content-false--progress-callback-none--max-connections-2--file-permission-none--smb-properties--azure-storage-file-models-smbproperties-objecttimeout-none-), [create_file_from_stream](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#create-file-from-stream-share-name--directory-name--file-name--stream--count--content-settings-none--metadata-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--file-permission-none--smb-properties--azure-storage-file-models-smbproperties-object--), [create_file_from_bytes](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#create-file-from-bytes-share-name--directory-name--file-name--file--index-0--count-none--content-settings-none--metadata-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--file-permission-none--smb-properties--azure-storage-file-models-smbproperties-object--), or [create_file_from_text](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#create-file-from-text-share-name--directory-name--file-name--text--encoding--utf-8content-settings-none--metadata-none--validate-content-false--timeout-none--file-permission-none--smb-properties--azure-storage-file-models-smbproperties-object--) methods. They're high-level methods that perform the necessary chunking when the size of the data exceeds 64 MiB.
+An Azure file share contains, at the least, a root directory where files can reside. To create a file and upload data, use the [create_file_from_path](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-path), [create_file_from_stream](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-stream), [create_file_from_bytes](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-bytes), or [create_file_from_text](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-create-file-from-text) methods. They're high-level methods that perform the necessary chunking when the size of the data exceeds 64 MiB.
`create_file_from_path` uploads the contents of a file from the specified path, and `create_file_from_stream` uploads the contents from an already opened file/stream. `create_file_from_bytes` uploads an array of bytes, and `create_file_from_text` uploads the specified text value using the specified encoding (defaults to UTF-8).
file_service.create_file_from_path(
# [Azure Python SDK v12](#tab/python)
-To list the files and directories in a subdirectory, use the [list_directories_and_files](/azure/developer/python/sdk/storage/azure-storage-file-share/azure.storage.fileshare.shareclient#list-directories-and-files-directory-name-none--name-starts-with-none--marker-none-kwargs-) method. This method returns an auto-paging iterable. The following code outputs the **name** of each file and subdirectory in the specified directory to the console.
+To list the files and directories in a subdirectory, use the [list_directories_and_files](/python/api/azure-storage-file-share/azure.storage.fileshare.ShareClient#azure-storage-fileshare-shareclient-list-directories-and-files) method. This method returns an auto-paging iterable. The following code outputs the **name** of each file and subdirectory in the specified directory to the console.
:::code language="python" source="~/azure-storage-snippets/files/howto/python/python-v12/file_share_ops.py" id="Snippet_ListFilesAndDirs"::: # [Azure Python SDK v2](#tab/python2)
-To list the files and directories in a share, use the [list_directories_and_files](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#list-directories-and-files-share-name--directory-name-none--num-results-none--marker-none--timeout-none--prefix-none--snapshot-none-) method. This method returns a generator. The following code outputs the **name** of each file and directory in a share to the console.
+To list the files and directories in a share, use the [list_directories_and_files](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-list-directories-and-files) method. This method returns a generator. The following code outputs the **name** of each file and directory in a share to the console.
```python generator = file_service.list_directories_and_files('myshare')
for file_or_dir in generator:
# [Azure Python SDK v12](#tab/python)
-To download data from a file, use [download_file](/azure/developer/python/sdk/storage/azure-storage-file-share/azure.storage.fileshare.sharefileclient#download-file-offset-none--length-none-kwargs-).
+To download data from a file, use [download_file](/python/api/azure-storage-file-share/azure.storage.fileshare.ShareFileClient#azure-storage-fileshare-sharefileclient-download-file).
The following example demonstrates using `download_file` to get the contents of the specified file and store it locally with **DOWNLOADED-** prepended to the filename.
The following example demonstrates using `download_file` to get the contents of
# [Azure Python SDK v2](#tab/python2)
-To download data from a file, use [get_file_to_path](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#get-file-to-path-share-name--directory-name--file-name--file-path--open-mode--wbstart-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--snapshot-none-), [get_file_to_stream](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#get-file-to-stream-share-name--directory-name--file-name--stream--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--snapshot-none-), [get_file_to_bytes](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#get-file-to-bytes-share-name--directory-name--file-name--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--snapshot-none-), or [get_file_to_text](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#get-file-to-text-share-name--directory-name--file-name--encoding--utf-8start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--timeout-none--snapshot-none-). They're high-level methods that perform the necessary chunking when the size of the data exceeds 64 MiB.
+To download data from a file, use [get_file_to_path](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-path), [get_file_to_stream](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-stream), [get_file_to_bytes](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-bytes), or [get_file_to_text](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-get-file-to-text). They're high-level methods that perform the necessary chunking when the size of the data exceeds 64 MiB.
The following example demonstrates using `get_file_to_path` to download the contents of the **myfile** file and store it to the *out-sunset.png* file.
directories_and_files = list(
## Get file from share snapshot
-You can download a file from a share snapshot. This enables you to restore a previous version of a file.
+You can download a file from a share snapshot, which enables you to restore a previous version of a file.
# [Azure Python SDK v12](#tab/python)
file_service.delete_share(share_name, snapshot=snapshot_id)
# [Azure Python SDK v12](#tab/python)
-To delete a file, call [delete_file](/azure/developer/python/sdk/storage/azure-storage-file-share/azure.storage.fileshare.sharefileclient#delete-filekwargs-).
+To delete a file, call [delete_file](/python/api/azure-storage-file-share/azure.storage.fileshare.ShareFileClient#azure-storage-fileshare-sharefileclient-delete-file).
:::code language="python" source="~/azure-storage-snippets/files/howto/python/python-v12/file_share_ops.py" id="Snippet_DeleteFile"::: # [Azure Python SDK v2](#tab/python2)
-To delete a file, call [delete_file](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python-previous&preserve-view=true#delete-file-share-name--directory-name--file-name--timeout-none-).
+To delete a file, call [delete_file](/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice#azure-storage-file-fileservice-fileservice-delete-file).
```python file_service.delete_file('myshare', None, 'myfile')
file_service.delete_file('myshare', None, 'myfile')
# [Azure Python SDK v12](#tab/python)
-To delete a share that contains snapshots, call [delete_share](/azure/developer/python/sdk/storage/azure-storage-file-share/azure.storage.fileshare.shareclient#delete-share-delete-snapshots-false-kwargs-) with `delete_snapshots=True`.
+To delete a share that contains snapshots, call [delete_share](/python/api/azure-storage-file-share/azure.storage.fileshare.ShareClient#azure-storage-fileshare-shareclient-delete-share) with `delete_snapshots=True`.
:::code language="python" source="~/azure-storage-snippets/files/howto/python/python-v12/file_share_ops.py" id="Snippet_DeleteShare"::: # [Azure Python SDK v2](#tab/python2)
-A share that contains snapshots cannot be deleted unless all the snapshots are deleted first.
+A share that contains snapshots can't be deleted unless all the snapshots are deleted first.
```python file_service.delete_share(share_name, delete_snapshots=DeleteSnapshot.Include)
Now that you've learned how to manipulate Azure Files with Python, follow these
- [Python Developer Center](/azure/developer/python/) - [Azure Storage Services REST API](/rest/api/azure/) - [Microsoft Azure Storage SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage)+
+For related code samples using deprecated Python version 2 SDKs, see [Code samples using Python version 2](files-samples-python-v2.md).
storage Passwordless Migrate Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/passwordless-migrate-queues.md
+
+ Title: Migrate applications to use passwordless authentication with Azure Queue Storage
+
+description: Learn to migrate existing applications away from Shared Key authorization with the account key to instead use Azure AD and Azure RBAC for enhanced security with Azure Storage Queues.
++ Last updated : 05/03/2023++++++
+# Migrate an application to use passwordless connections with Azure Queue Storage
++
+## Configure your local development environment
+
+Passwordless connections can be configured to work for both local and Azure-hosted environments. In this section, you'll apply configurations to allow individual users to authenticate to Azure Queue Storage for local development.
+
+### Assign user roles
++
+### Sign-in to Azure locally
++
+### Update the application code to use passwordless connections
+
+The Azure Identity client library, for each of the following ecosystems, provides a `DefaultAzureCredential` class that handles passwordless authentication to Azure:
+
+- [.NET](/dotnet/api/overview/azure/Identity-readme?view=azure-dotnet&preserve-view=true#defaultazurecredential)
+- [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#readme-defaultazurecredential)
+- [Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#defaultazurecredential)
+- [Node.js](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true#defaultazurecredential)
+- [Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true#defaultazurecredential)
+
+`DefaultAzureCredential` supports multiple authentication methods. The method to use is determined at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. See the links above for the order and locations in which `DefaultAzureCredential` looks for credentials.
+
+## [.NET](#tab/dotnet)
+
+1. To use `DefaultAzureCredential` in a .NET application, install the `Azure.Identity` package:
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```csharp
+ using Azure.Identity;
+ ```
+
+1. Identify the locations in your code that create a `QueueClient` to connect to Azure Storage. Update your code to match the following example:
+
+ ```csharp
+ var credential = new DefaultAzureCredential();
+
+ // TODO: Update the <storage-account-name> and <queue-name> placeholders.
+ var queueClient = new QueueClient(
+ new Uri($"https://<storage-account-name>.queue.core.windows.net/<queue-name>"),
+        credential);
+ ```
+++
+4. Make sure to update the storage account name in the URI of your `QueueClient` object. You can find the storage account name on the overview page of the Azure portal.
+
+ :::image type="content" source="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="Screenshot showing how to find the storage account name." lightbox="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png":::
+
+### Run the app locally
+
+After making these code changes, run your application locally. The new configuration should pick up your local credentials, such as those from the Azure CLI, Visual Studio, or IntelliJ. The roles you assigned to your user in Azure allow your app to connect to the Azure service locally.
+
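As a quick local check, a hedged sketch like the following can confirm that the passwordless `QueueClient` connects. The account and queue names are the same placeholders used in the earlier snippet, and the queue is assumed to already exist.

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Queues;

// TODO: Update the <storage-account-name> and <queue-name> placeholders.
var queueClient = new QueueClient(
    new Uri("https://<storage-account-name>.queue.core.windows.net/<queue-name>"),
    new DefaultAzureCredential());

// Send and then peek a message to verify the passwordless connection works.
queueClient.SendMessage("connectivity check");
Console.WriteLine(queueClient.PeekMessage().Value.MessageText);
```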
+## Configure the Azure hosting environment
+
+Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it's deployed to Azure. The sections that follow explain how to configure a deployed application to connect to Azure Queue Storage using a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview). Managed identities provide an automatically managed identity in Azure Active Directory (Azure AD) for applications to use when connecting to resources that support Azure AD authentication. Learn more about managed identities:
+
+* [Passwordless Overview](/azure/developer/intro/passwordless-overview)
+* [Managed identity best practices](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations)
+
+### Create the managed identity
++
+#### Associate the managed identity with your web app
+
+You need to configure your web app to use the managed identity you created. Assign the identity to your app using either the Azure portal or the Azure CLI.
+
+# [Azure portal](#tab/azure-portal-associate)
+
+Complete the following steps in the Azure portal to associate an identity with your app. These same steps apply to the following Azure services:
+
+* Azure Spring Apps
+* Azure Container Apps
+* Azure virtual machines
+* Azure Kubernetes Service
+
+1. Navigate to the overview page of your web app.
+1. Select **Identity** from the left navigation.
+1. On the **Identity** page, switch to the **User assigned** tab.
+1. Select **+ Add** to open the **Add user assigned managed identity** flyout.
+1. Select the subscription you used previously to create the identity.
+1. Search for the **MigrationIdentity** by name and select it from the search results.
+1. Select **Add** to associate the identity with your app.
+
+ :::image type="content" source="../../../articles/storage/common/media/create-user-assigned-identity-small.png" alt-text="Screenshot showing how to create a user assigned identity." lightbox="../../../articles/storage/common/media/create-user-assigned-identity.png":::
+
+# [Azure CLI](#tab/azure-cli-associate)
++
+# [Service Connector](#tab/service-connector-associate)
++++
+### Assign roles to the managed identity
+
+Next, you need to grant permissions to the managed identity you created to access your storage account. Grant permissions by assigning a role to the managed identity, just like you did with your local development user.
+
+### [Azure portal](#tab/assign-role-azure-portal)
+
+1. Navigate to your storage account overview page and select **Access Control (IAM)** from the left navigation.
+
+1. Choose **Add role assignment**
+
+ :::image type="content" source="../../../includes/passwordless/media/migration-add-role-small.png" alt-text="Screenshot showing how to add a role to a managed identity." lightbox="../../../includes/passwordless/media/migration-add-role.png" :::
+
+1. In the **Role** search box, search for *Storage Queue Data Contributor*, which is a common role used to manage data operations for queues. You can assign whatever role is appropriate for your use case. Select the *Storage Queue Data Contributor* from the list and choose **Next**.
+
+1. On the **Add role assignment** screen, for the **Assign access to** option, select **Managed identity**. Then choose **+Select members**.
+
+1. In the flyout, search for the managed identity you created by name and select it from the results. Choose **Select** to close the flyout menu.
+
+ :::image type="content" source="../../../includes/passwordless/media/migration-select-identity-small.png" alt-text="Screenshot showing how to select the assigned managed identity." lightbox="../../../includes/passwordless/media/migration-select-identity.png":::
+
+1. Select **Next** a couple of times until you're able to select **Review + assign** to finish the role assignment.
+
+### [Azure CLI](#tab/assign-role-azure-cli)
+
+To assign a role at the resource level using the Azure CLI, you must first retrieve the resource ID using the [az storage account](/cli/azure/storage/account) `show` command. You can filter the output properties using the `--query` parameter.
+
+```azurecli
+az storage account show \
+ --resource-group '<your-resource-group-name>' \
+ --name '<your-storage-account-name>' \
+ --query id
+```
+
+Copy the output ID from the preceding command. You can then assign roles using the [az role assignment](/cli/azure/role/assignment) command of the Azure CLI.
+
+```azurecli
+az role assignment create \
+ --assignee "<your-username>" \
+ --role "Storage Queue Data Contributor" \
+ --scope "<your-resource-id>"
+```
+
+### [Service Connector](#tab/assign-role-service-connector)
+
+If you connected your services using Service Connector, you don't need to complete this step. The necessary role configurations were handled for you when you ran the Service Connector CLI commands.
+++
+### Update the application code
+
+You need to configure your application code to look for the specific managed identity you created when it's deployed to Azure. In some scenarios, explicitly setting the managed identity for the app also prevents other environment identities from accidentally being detected and used automatically.
+
+1. On the managed identity overview page, copy the client ID value to your clipboard.
+1. Update the `DefaultAzureCredential` object to specify this managed identity client ID:
+
+ ## [.NET](#tab/dotnet)
+
+ ```csharp
+ // TODO: Update the <managed-identity-client-id> placeholder.
+ var credential = new DefaultAzureCredential(
+ new DefaultAzureCredentialOptions
+ {
+ ManagedIdentityClientId = "<managed-identity-client-id>"
+ });
+ ```
+
+
+
+1. Redeploy your code to Azure after making this change in order for the configuration updates to be applied.
+
+### Test the app
+
+After deploying the updated code, browse to your hosted application in the browser. Your app should be able to connect to the storage account successfully. Keep in mind that it may take several minutes for the role assignments to propagate through your Azure environment. Your application is now configured to run both locally and in a production environment without the developers having to manage secrets in the application itself.
+
+## Next steps
+
+In this tutorial, you learned how to migrate an application to passwordless connections.
+
+You can read the following resources to explore the concepts discussed in this article in more depth:
+
+* [Authorize access to blobs using Azure Active Directory](../blobs/authorize-access-azure-active-directory.md)
+* To learn more about .NET, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
storage Table Storage Design Modeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-modeling.md
Building domain models is a key step in the design of complex systems. Typically
## One-to-many relationships One-to-many relationships between business domain objects occur frequently: for example, one department has many employees. There are several ways to implement one-to-many relationships in the Table service, each with pros and cons that may be relevant to the particular scenario.
-Consider the example of a large multi-national corporation with tens of thousands of departments and employee entities where every department has many employees and each employee as associated with one specific department. One approach is to store separate department and employee entities such as these:
+Consider the example of a large multi-national/regional corporation with tens of thousands of departments and employee entities where every department has many employees and each employee is associated with one specific department. One approach is to store separate department and employee entities such as these:
![Store separate department and employee entities](media/storage-table-design-guide/storage-table-design-IMAGE01.png)
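As an illustration of the separate-entities approach, the following hedged sketch shows what department and employee entities might look like with the current `Azure.Data.Tables` SDK; the class and property names are illustrative and not taken from this article.

```csharp
using System;
using Azure;
using Azure.Data.Tables;

// Department entity: one row per department.
public class DepartmentEntity : ITableEntity
{
    public string PartitionKey { get; set; }      // for example, the department name
    public string RowKey { get; set; }            // a fixed marker such as "DEPARTMENT"
    public string Manager { get; set; }
    public ETag ETag { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
}

// Employee entity: many rows share the department's partition key.
public class EmployeeEntity : ITableEntity
{
    public string PartitionKey { get; set; }      // the department name, so employees group with their department
    public string RowKey { get; set; }            // the employee ID
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public ETag ETag { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
}
```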
storsimple Storsimple 8000 Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-safety.md
To reduce the likelihood of injury, electrical shock, or death:
* When powered by multiple AC sources, disconnect all supply power for complete isolation. * Permanently unplug the unit before you move it or if you think it has become damaged in any way.
-* Provide a safe electrical earth connection to the power supply cords. Verify that the grounding of the enclosure meets the national and local requirements before applying power.
+* Provide a safe electrical earth connection to the power supply cords. Verify that the grounding of the enclosure meets the national/regional and local requirements before applying power.
* Ensure that the power connection is always disconnected prior to the removal of a PCM from the enclosure. * Given that the plug on the power supply cord is the main disconnect device, ensure that the socket outlets are located near the equipment and are easily accessible.
synapse-analytics Apache Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-history-server.md
Apache Spark history server is the web user interface for completed and running
2. Select **Monitor**, then select **Apache Spark Applications**.
- ![Select monitor then select spark application.](./media/apache-spark-history-server/click-monitor-spark-application.png)
+ ![Screenshot showing select monitor then select Spark application.](./media/apache-spark-history-server/click-monitor-spark-application.png)
3. Select an application, then open **Log query** by selecting it.
- ![Open log query window.](./media/apache-spark-history-server/open-application-window.png)
+ ![Screenshot showing open log query window.](./media/apache-spark-history-server/open-application-window.png)
-4. Select **Spark history server**, then the Spark History Server web UI will show up.
+4. Select **Spark history server** to open the Spark History Server web UI. For a running Spark application, the button is **Spark UI**.
- ![Open spark history server.](./media/apache-spark-history-server/open-spark-history-server.png)
+ ![Screenshot showing open Spark history server.](./media/apache-spark-history-server/open-spark-history-server.png)
+ ![Screenshot showing open Spark UI.](./media/apache-spark-history-server/apache-spark-ui.png)
### Open the Spark History Server web UI from Data node 1. From your Synapse Studio notebook, select **Spark history server** from the job execution output cell or from the status panel at the bottom of the notebook document. Select **Session details**.
- ![Launch Spark history server 1](./media/apache-spark-history-server/launch-history-server2.png "Launch Spark history server")
+ ![Screenshot showing launch Spark history server 1.](./media/apache-spark-history-server/launch-history-server2.png "Launch Spark history server")
2. Select **Spark history server** from the slide out panel.
- ![Launch Spark history server 2](./media/apache-spark-history-server/launch-history-server.png "Launch Spark history server")
+ ![Screenshot showing launch Spark history server 2.](./media/apache-spark-history-server/launch-history-server.png "Launch Spark history server")
-## Explore the Data tab in Spark history server
-
-Select the Job ID for the job you want to view. Then select **Data** on the tool menu to get the data view. This section shows you how to do various tasks in the Data tab.
-
-* Check the **Inputs**, **Outputs**, and **Table Operations** by selecting the tabs separately.
-
- ![Data for Spark application tabs](./media/apache-spark-history-server/apache-spark-data-tabs.png)
-
-* Copy all rows by selecting **Copy**.
-
- ![Data for Spark application copy](./media/apache-spark-history-server/apache-spark-data-copy.png)
-
-* Save all data as CSV file by selecting **csv**.
-
- ![Data for Spark application save](./media/apache-spark-history-server/apache-spark-data-save.png)
-
-* Search by entering keywords in field **Search**. The search results display immediately.
-
- ![Data for Spark application search](./media/apache-spark-history-server/apache-spark-data-search.png)
-
-* Select the column header to sort table, select the plus sign to expand a row to show more details, or select the minus sign to collapse a row.
-
- ![Data for Spark application table](./media/apache-spark-history-server/apache-spark-data-table.png)
-
-* Download a single file by selecting **Partial Download**. The selected file is downloaded to local. If the file no longer exists, a new tab appears with an error message.
-
- ![Data for Spark application download row](./media/apache-spark-history-server/sparkui-data-download-row.png)
-
-* To copy a full path or relative path, select the **Copy Full Path** or **Copy Relative Path** options that expand from the drop-down menu. For Azure Data Lake Storage files, **Open in Azure Storage Explorer** launches Azure Storage Explorer and locates the folder when you are signed in.
-
- ![Data for Spark application copy path](./media/apache-spark-history-server/sparkui-data-copy-path.png)
-
-* Select page numbers below the table to navigate pages when there are too many rows to display in one page.
-
- ![Data for Spark application page](./media/apache-spark-history-server/apache-spark-data-page.png)
-
-* Hover on the question mark beside **Data** to show the tooltip, or select the question mark to get more information.
-
- ![Data for Spark application more info](./media/apache-spark-history-server/sparkui-data-more-info.png)
-
-* Send feedback with issues by selecting **Provide us feedback**.
-
- ![Spark graph provide us feedback again](./media/apache-spark-history-server/sparkui-graph-feedback.png)
## Graph tab in Apache Spark history server
Select the Job ID for the job you want to view. Then, select **Graph** on the to
You can see an overview of your job in the generated job graph. By default, the graph shows all jobs. You can filter this view by **Job ID**.
-![Spark application and job graph job ID](./media/apache-spark-history-server/apache-spark-graph-jobid.png)
+![Screenshot showing Spark application and job graph job ID.](./media/apache-spark-history-server/apache-spark-graph-jobid.png)
### Display By default, the **Progress** display is selected. You can check the data flow by selecting **Read** or **Written** in the **Display** dropdown list.
-![Spark application and job graph display](./media/apache-spark-history-server/sparkui-graph-display.png)
+![Screenshot showing Spark application and job graph display.](./media/apache-spark-history-server/sparkui-graph-display.png)
The graph node displays the colors shown in the heatmap legend.
-![Spark application and job graph heatmap](./media/apache-spark-history-server/sparkui-graph-heatmap.png)
+![Screenshot showing Spark application and job graph heatmap.](./media/apache-spark-history-server/sparkui-graph-heatmap.png)
### Playback
To playback the job, select **Playback**. You can select **Stop** at any time to
The following image shows Green, Orange, and Blue status colors.
-![Spark application and job graph color sample, running](./media/apache-spark-history-server/sparkui-graph-color-running.png)
+![Screenshot showing Spark application and job graph color sample, running.](./media/apache-spark-history-server/sparkui-graph-color-running.png)
The following image shows Green and White status colors.
-![Spark application and job graph color sample, skip](./media/apache-spark-history-server/sparkui-graph-color-skip.png)
+![Screenshot showing Spark application and job graph color sample, skip.](./media/apache-spark-history-server/sparkui-graph-color-skip.png)
The following image shows Red and Green status colors.
-![Spark application and job graph color sample, failed](./media/apache-spark-history-server/sparkui-graph-color-failed.png)
+![Screenshot showing Spark application and job graph color sample, failed.](./media/apache-spark-history-server/sparkui-graph-color-failed.png)
> [!NOTE] > Playback for each job is allowed. For incomplete jobs, playback is not supported.
The following image shows Red and Green status colors.
Use your mouse scroll to zoom in and out on the job graph, or select **Zoom to fit** to make it fit to screen.
-![Spark application and job graph zoom to fit](./media/apache-spark-history-server/sparkui-graph-zoom2fit.png)
+![Screenshot showing Spark application and job graph zoom to fit.](./media/apache-spark-history-server/sparkui-graph-zoom2fit.png)
### Tooltips Hover on graph node to see the tooltip when there are failed tasks, and select a stage to open its stage page.
-![Spark application and job graph tooltip](./media/apache-spark-history-server/sparkui-graph-tooltip.png)
+![Screenshot showing Spark application and job graph tooltip.](./media/apache-spark-history-server/sparkui-graph-tooltip.png)
On the job graph tab, stages have a tooltip and a small icon displayed if they have tasks that meet the following conditions:
On the job graph tab, stages have a tooltip and a small icon displayed if they h
|Data skew|data read size > average data read size of all tasks inside this stage * 2 and data read size > 10 MB| |Time skew|execution time > average execution time of all tasks inside this stage * 2 and execution time > 2 minutes|
-![Spark application and job graph skew icon](./media/apache-spark-history-server/sparkui-graph-skew-icon.png)
+![Screenshot showing Spark application and job graph skew icon.](./media/apache-spark-history-server/sparkui-graph-skew-icon.png)
### Graph node description
The job graph node displays the following information of each stage:
Send feedback with issues by selecting **Provide us feedback**.
-![Spark application and job graph feedback](./media/apache-spark-history-server/sparkui-graph-feedback.png)
+![Screenshot showing Spark application and job graph feedback.](./media/apache-spark-history-server/sparkui-graph-feedback.png)
## Explore the Diagnosis tab in Apache Spark history server
To access the Diagnosis tab, select a job ID. Then select **Diagnosis** on the t
Check the **Data Skew**, **Time Skew**, and **Executor Usage Analysis** by selecting the tabs respectively.
-![SparkUI diagnosis data skew tab again](./media/apache-spark-history-server/sparkui-diagnosis-tabs.png)
+![Screenshot showing Spark UI diagnosis data skew tab again.](./media/apache-spark-history-server/sparkui-diagnosis-tabs.png)
### Data Skew
When you select the **Data Skew** tab, the corresponding skewed tasks are displa
* **Skewed Stage** - The second section displays stages that have skewed tasks meeting the criteria specified above. If there is more than one skewed task in a stage, the skewed stage table displays only the most skewed task (for example, the largest data for data skew).
- ![sparkui diagnosis data skew tab](./media/apache-spark-history-server/sparkui-diagnosis-dataskew-section2.png)
+ ![Screenshot showing Spark UI diagnosis data skew tab.](./media/apache-spark-history-server/sparkui-diagnosis-dataskew-section2.png)
* **Skew Chart** – When a row in the skew stage table is selected, the skew chart displays more task distribution details based on data read and execution time. The skewed tasks are marked in red and the normal tasks are marked in blue. The chart displays up to 100 sample tasks, and the task details are displayed in the bottom-right panel.
- ![sparkui skew chart for stage 10](./media/apache-spark-history-server/sparkui-diagnosis-dataskew-section3.png)
+ ![Screenshot showing Spark UI skew chart for stage 10.](./media/apache-spark-history-server/sparkui-diagnosis-dataskew-section3.png)
### Time Skew
The **Time Skew** tab displays skewed tasks based on task execution time.
* Select **Time Skew**, then the filtered results are displayed in the **Skewed Stage** section according to the parameters set in the **Specify Parameters** section. Select one item in the **Skewed Stage** section, then the corresponding chart is drawn in section 3, and the task details are displayed in the bottom-right panel.
- ![sparkui diagnosis time skew section](./media/apache-spark-history-server/sparkui-diagnosis-timeskew-section2.png)
+ ![Screenshot showing Spark UI diagnosis time skew section.](./media/apache-spark-history-server/sparkui-diagnosis-timeskew-section2.png)
### Executor Usage Analysis
The Executor Usage Graph visualizes the Spark job executor's allocation and runn
1. Select **Executor Usage Analysis**, then four types of curves about executor usage are drawn, including **Allocated Executors**, **Running Executors**, **Idle Executors**, and **Max Executor Instances**. For allocated executors, each "Executor added" or "Executor removed" event increases or decreases the allocated executors. You can check "Event Timeline" in the "Jobs" tab for comparison.
- ![sparkui diagnosis executors tab](./media/apache-spark-history-server/sparkui-diagnosis-executors.png)
+ ![Screenshot showing Spark UI diagnosis executors tab.](./media/apache-spark-history-server/sparkui-diagnosis-executors.png)
2. Select the color icon to select or unselect the corresponding content in all drafts.
- ![sparkui diagnoses select chart](./media/apache-spark-history-server/sparkui-diagnosis-select-chart.png)
+ ![Screenshot showing Spark UI diagnoses select chart.](./media/apache-spark-history-server/sparkui-diagnosis-select-chart.png)
+
+### Troubleshooting guide for 404 in Spark UI
+
+In some cases, for long-running Spark applications with massive jobs and stages, opening the Spark UI may fail and the following page may appear:
+
+![Screenshot showing the troubleshooting guide for 404 in Spark UI.](./media/apache-spark-history-server/404-in-spark-ui.png)
+
+As a workaround, an extra Spark configuration can be applied to the Spark pool:
+```
+spark.synapse.history.rpc.memoryNeeded 1g
+```
+
+![Screenshot showing add Spark configuration.](./media/apache-spark-history-server/add-spark-configuration.png)
+
+For existing running Spark applications, on the Spark UI page, add this query string at the end of the browser's address bar: **?feature.enableStandaloneHS=false**
+
+![Screenshot showing add this query string at the end of the browser's address bar.](./media/apache-spark-history-server/spark-server-enable.png)
## Known issues
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Use multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection on Azure Virtual Desktop. Previously updated : 04/07/2023 Last updated : 05/05/2023
This article will show you how to use multimedia redirection (MMR) for Azure Virtual Desktop with Microsoft Edge or Google Chrome browsers. For more information about how multimedia redirection works, see [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md).
-> [!NOTE]
->Multimedia redirection on Azure Virtual Desktop is only available for the Windows Desktop client on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. Multimedia redirection requires the [Windows Desktop client, version 1.2.3916 or later](users/connect-windows.md) with Insider releases enabled. For more information, see [Prerequisites](#prerequisites).
- ## Prerequisites Before you can use multimedia redirection on Azure Virtual Desktop, you'll need the following things:
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
description: Learn about new and updated articles to the Azure Virtual Desktop d
Previously updated : 04/18/2023 Last updated : 05/05/2023 # What's new in documentation for Azure Virtual Desktop We update documentation for Azure Virtual Desktop on a regular basis. In this article we highlight articles for new features and where there have been important updates to existing articles.
+## April 2023
+
+In April 2023, we published the following changes:
+
+- New articles for the Azure Virtual Desktop Store app public preview:
+ - [Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md).
+ - [Use features of the Azure Virtual Desktop Store app for Windows](users/client-features-windows-azure-virtual-desktop-app.md).
+ - [What's new in the Azure Virtual Desktop Store app for Windows](whats-new-client-windows-azure-virtual-desktop-app.md).
+- Provided guidance on how to [Install the Remote Desktop client for Windows on a per-user basis](install-client-per-user.md) when using Intune or Configuration Manager.
+- Documented [MSIXMGR tool parameters](msixmgr-tool-syntax-description.md).
+- A new article to learn [What's new in the MSIXMGR tool](whats-new-msixmgr.md).
+ ## March 2023 In March 2023, we published the following changes: - A new article for the public preview of [Uniform Resource Identifier (URI) schemes with the Remote Desktop client](uri-scheme.md).-- An update showing you how to [give session hosts in a personal host pool a friendly name](configure-host-pool-personal-desktop-assignment-type.md#give-session-hosts-in-a-personal-host-pool-a-friendly-name).
+- An update showing you how to [Give session hosts in a personal host pool a friendly name](configure-host-pool-personal-desktop-assignment-type.md#give-session-hosts-in-a-personal-host-pool-a-friendly-name).
## February 2023 In February 2023, we published the following changes: - Updated [RDP Shortpath](rdp-shortpath.md?tabs=public-networks) and [Configure RDP Shortpath](configure-rdp-shortpath.md?tabs=public-networks) articles with the public preview information for an indirect UDP connection using the Traversal Using Relay NAT (TURN) protocol with a relay between a client and session host.-- Reorganized the table of contents
+- Reorganized the table of contents.
- Published the following articles for deploying Azure Virtual Desktop:
- - [Tutorial to create and connect to a Windows 11 desktop with Azure Virtual Desktop](tutorial-create-connect-personal-desktop.md)
- - [Create a host pool](create-host-pool.md)
- - [Create an application group, a workspace, and assign users](create-application-group-workspace.md)
- - [Add session hosts to a host pool](add-session-hosts-host-pool.md)
+ - [Tutorial to create and connect to a Windows 11 desktop with Azure Virtual Desktop](tutorial-create-connect-personal-desktop.md).
+ - [Create a host pool](create-host-pool.md).
+ - [Create an application group, a workspace, and assign users](create-application-group-workspace.md).
+ - [Add session hosts to a host pool](add-session-hosts-host-pool.md).
## January 2023
In January 2023, we published the following change:
## Next steps -- Learn [what's new for Azure Virtual Desktop](whats-new.md).
+ Learn [What's new for Azure Virtual Desktop](whats-new.md).
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption.md
Title: Server-side encryption of Azure managed disks description: Azure Storage protects your data by encrypting it at rest before persisting it to Storage clusters. You can use customer-managed keys to manage encryption with your own keys, or you can rely on Microsoft-managed keys for the encryption of your managed disks. Previously updated : 03/23/2023 Last updated : 05/03/2023
Customer-managed keys are available in all regions that managed disks are availa
> [!IMPORTANT] > Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD). When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another, the managed identity associated with managed disks isn't transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Azure AD directories](../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-To enable customer-managed keys for managed disks, see our articles covering how to enable it with either the [Azure PowerShell module](windows/disks-enable-customer-managed-keys-powershell.md), the [Azure CLI](linux/disks-enable-customer-managed-keys-cli.md) or the [Azure portal](disks-enable-customer-managed-keys-portal.md).
+To enable customer-managed keys for managed disks, see our articles covering how to enable it with either the [Azure PowerShell module](windows/disks-enable-customer-managed-keys-powershell.md), the [Azure CLI](linux/disks-enable-customer-managed-keys-cli.md) or the [Azure portal](disks-enable-customer-managed-keys-portal.md).
+
+See [Create a managed disk from a snapshot with CLI](scripts/create-managed-disk-from-snapshot.md#disks-with-customer-managed-keys) for a code sample.
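As a rough sketch of what the linked sample covers (the resource names below are placeholders, not values from that article), a disk can typically be created from a snapshot and encrypted with a customer-managed key by combining the `--source` and `--disk-encryption-set` options of `az disk create`:

```azurecli
# Placeholder names; the disk encryption set must already exist and reference your key vault key.
az disk create \
  --resource-group myResourceGroup \
  --name myCmkDisk \
  --source mySnapshot \
  --disk-encryption-set myDiskEncryptionSet
```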
## Encryption at host - End-to-end encryption for your VM data
virtual-machines Disks Deploy Zrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-zrs.md
Title: Deploy a ZRS managed disk
description: Learn how to deploy a managed disk that uses zone-redundant storage (ZRS). Previously updated : 12/14/2022 Last updated : 05/05/2023
For conceptual information on ZRS, see [Zone-redundant storage for managed disks
[!INCLUDE [disk-storage-zrs-limitations](../../includes/disk-storage-zrs-limitations.md)]
+## Regional availability
++ # [Azure portal](#tab/portal) ### Create a VM with a ZRS OS disk
virtual-machines Disks Enable Customer Managed Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-customer-managed-keys-portal.md
The VM deployment process is similar to the standard deployment process, the onl
- [What is Azure Key Vault?](../key-vault/general/overview.md) - [Replicate machines with customer-managed keys enabled disks](../site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md) - [Set up disaster recovery of VMware VMs to Azure with PowerShell](../site-recovery/vmware-azure-disaster-recovery-powershell.md#replicate-vmware-vms)-- [Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager](../site-recovery/hyper-v-azure-powershell-resource-manager.md#step-7-enable-vm-protection)
+- [Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager](../site-recovery/hyper-v-azure-powershell-resource-manager.md#step-7-enable-vm-protection)
+- See [Create a managed disk from a snapshot with CLI](scripts/create-managed-disk-from-snapshot.md#disks-with-customer-managed-keys) for a code sample.
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-redundancy.md
Title: Redundancy options for Azure managed disks
-description: Learn about zone-redundant storage and locally-redundant storage for Azure managed disks.
+description: Learn about zone-redundant storage and locally redundant storage for Azure managed disks.
Previously updated : 10/19/2022 Last updated : 05/05/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Azure managed disks offer two storage redundancy options, zone-redundant storage (ZRS), and locally-redundant storage. ZRS provides higher availability for managed disks than locally-redundant storage (LRS) does. However, the write latency for LRS disks is better than ZRS disks because LRS disks synchronously write data to three copies in a single data center.
+Azure managed disks offer two storage redundancy options, zone-redundant storage (ZRS), and locally redundant storage. ZRS provides higher availability for managed disks than locally redundant storage (LRS) does. However, the write latency for LRS disks is better than ZRS disks because LRS disks synchronously write data to three copies in a single data center.
-## Locally-redundant storage for managed disks
+## Locally redundant storage for managed disks
-Locally-redundant storage (LRS) replicates your data three times within a single data center in the selected region. LRS protects your data against server rack and drive failures. LRS disks provide at least 99.999999999% (11 9's) of durability over a given year. To protect an LRS disk from a zonal failure like a natural disaster or other issues, take the following steps:
+Locally redundant storage (LRS) replicates your data three times within a single data center in the selected region. LRS protects your data against server rack and drive failures. LRS disks provide at least 99.999999999% (11 9's) of durability over a given year. To protect an LRS disk from a zonal failure like a natural disaster or other issues, take the following steps:
- Use applications that can synchronously write data to two zones, and automatically failover to another zone during a disaster. - An example would be SQL Server Always On.
A ZRS disk lets you recover from failures in availability zones. If a zone went
For more information on ZRS disks, see [Zone Redundant Storage (ZRS) option for Azure Disks for high availability](https://youtu.be/RSHmhmdHXcY).
+### Limitations
++
+### Regional availability
++ ### Billing implications For details see the [Azure pricing page](https://azure.microsoft.com/pricing/details/managed-disks/).
For details see the [Azure pricing page](https://azure.microsoft.com/pricing/det
Except for more write latency, disks using ZRS are identical to disks using LRS, they have the same scale targets. [Benchmark your disks](disks-benchmarks.md) to simulate the workload of your application and compare the latency between LRS and ZRS disks.
-### Limitations
-- ## Next steps - To learn how to create a ZRS disk, see [Deploy a ZRS managed disk](disks-deploy-zrs.md).
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
| Standard_E32bs_v5 | 32 | 256 | 32 | 88000/2500 | 120000/4000 |117920/2500 |160000/4000 | 8 | 16000 | | Standard_E48bs_v5 | 48 | 384 | 32 | 132000/4000 | 150000/5000 | 160000/4000| 160000/4000| 8 | 16000 | | Standard_E64bs_v5 | 64 | 512 | 32 | 176000/5000 | 200000/5000 | 160000/4000|160000/4000 | 8 | 20000 |
-| Standard_E96bs_v5 | 96 | 672 | 32 | 260000/75000 | 260000/8000 | 260000/6500|260000/6500 | 8 | 25000 |
+| Standard_E96bs_v5 | 96 | 672 | 32 | 260000/7500 | 260000/8000 | 260000/6500|260000/6500 | 8 | 25000 |
| Standard_E112ibs_v5 | 112| 672 | 64 | 260000/8000 | 260000/8000 | 260000/6500|260000/6500 | 8 | 40000 | [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Disks Enable Customer Managed Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-customer-managed-keys-cli.md
Title: Azure CLI - Enable customer-managed keys with SSE - managed disks description: Enable customer-managed keys on your managed disks with the Azure CLI. Previously updated : 02/22/2023 Last updated : 05/03/2023
az disk-encryption-set update -n keyrotationdes -g keyrotationtesting --key-url
- [Replicate machines with customer-managed keys enabled disks](../../site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md) - [Set up disaster recovery of VMware VMs to Azure with PowerShell](../../site-recovery/vmware-azure-disaster-recovery-powershell.md#replicate-vmware-vms) - [Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager](../../site-recovery/hyper-v-azure-powershell-resource-manager.md#step-7-enable-vm-protection)
+- See [Create a managed disk from a snapshot with CLI](../scripts/create-managed-disk-from-snapshot.md#disks-with-customer-managed-keys) for a code sample.
virtual-machines Configure Oracle Dataguard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-dataguard.md
Previously updated : 08/02/2018 Last updated : 03/23/2023 - # Implement Oracle Data Guard on an Azure Linux virtual machine **Applies to:** :heavy_check_mark: Linux VMs
-Azure CLI is used to create and manage Azure resources from the command line or in scripts. This article describes how to use Azure CLI to deploy an Oracle Database 12c database from the Azure Marketplace image. This article then shows you, step by step, how to install and configure Data Guard on an Azure virtual machine (VM).
+Azure CLI is used to create and manage Azure resources from the command line or in scripts. This article describes how to use Azure CLI to deploy an Oracle Database 19c Release 3 database from the Azure Marketplace image. This article then shows you, step by step, how to install and configure Data Guard on an Azure virtual machine (VM). To secure the environment, no ports will be publicly accessible and a Bastion instance will be included to provide access to the Oracle VMs.
Before you start, make sure that Azure CLI is installed. For more information, see the [Azure CLI installation guide](/cli/azure/install-azure-cli).
Before you start, make sure that Azure CLI is installed. For more information, s
To install Oracle Data Guard, you need to create two Azure VMs on the same availability set: -- The primary VM (myVM1) has a running Oracle instance.-- The standby VM (myVM2) has the Oracle software installed only.
+- The primary VM (OracleVM1) has a running Oracle instance.
+- The standby VM (OracleVM2) has the Oracle software installed only.
-The Marketplace image that you use to create the VMs is Oracle:Oracle-Database-Ee:12.1.0.2:latest.
+The Marketplace image that you use to create the VMs is Oracle:oracle-database-19-3:oracle-database-19-0904:latest.
> [!NOTE] > Be aware of versions that are End Of Life (EOL) and no longer supported by Redhat. Uploaded images that are, at or beyond EOL will be supported on a reasonable business effort basis. Link to Redhat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) - ### Sign in to Azure - Sign in to your Azure subscription by using the [az login](/cli/azure/reference-index) command and follow the on-screen directions.- ```azurecli az login ```
+### Set environment variables
+
+Adjust the **LOCATION** variable for your environment.
+```azurecli
+LOCATION=eastus
+RESOURCE_GROUP="Oracle-Lab"
+VM_USERNAME="azureuser"
+VM_PASSWORD="OracleLab123"
+VNET_NAME="${RESOURCE_GROUP}VNet"
+```
+
+### Enable the Azure CLI Bastion extension
+
+Include the Bastion extension.
+```azurecli
+az extension add \
+ --name bastion
+```
+ ### Create a resource group Create a resource group by using the [az group create](/cli/azure/group) command. An Azure resource group is a logical container in which Azure resources are deployed and managed.
-The following example creates a resource group named `myResourceGroup` in the `westus` location:
+```azurecli
+az group create \
+ --name $RESOURCE_GROUP \
+ --location $LOCATION
+```
+
+### Create a virtual network (VNet) with two subnets
+Create a virtual network to connect all compute services. One subnet hosts Azure Bastion, a service that protects your databases from public access. The second subnet hosts the two Oracle database VMs. You also create a network security group that all services reference to determine which ports are publicly exposed. Only port 443 is exposed; the Bastion service opens this port automatically when it's created.
```azurecli
-az group create --name myResourceGroup --location westus
+az network vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --name $VNET_NAME \
+ --address-prefix "10.0.0.0/16"
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name AzureBastionSubnet \
+ --vnet-name $VNET_NAME \
+ --address-prefixes 10.0.0.0/24
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name OracleSubnet \
+ --vnet-name $VNET_NAME \
+ --address-prefixes 10.0.1.0/24
+az network nsg create \
+ --name OracleVM-NSG \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION
``` ### Create an availability set- Creating an availability set is optional, but we recommend it. For more information, see [Azure availability sets guidelines](/previous-versions/azure/virtual-machines/windows/infrastructure-example). ```azurecli az vm availability-set create \
- --resource-group myResourceGroup \
- --name myAvailabilitySet \
- --platform-fault-domain-count 2 \
- --platform-update-domain-count 2
+ --resource-group $RESOURCE_GROUP \
+ --name OracleVMAvailabilitySet \
+ --platform-fault-domain-count 2 \
+ --platform-update-domain-count 2
```
-### Create a virtual machine
+### Create two virtual machines
-Create a VM by using the [az vm create](/cli/azure/vm) command.
+Create two VMs by using the [az vm create](/cli/azure/vm) command.
-The following example creates two VMs named `myVM1` and `myVM2`. It also creates SSH keys, if they do not already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option.
+The following example creates two VMs named `OracleVM1` and `OracleVM2`.
-> [!NOTE]
-> Be aware of versions that are End Of Life (EOL) and no longer supported by Redhat. Uploaded images that are, at or beyond EOL will be supported on a reasonable business effort basis. Link to Redhat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204)
--
-Create myVM1 (primary):
+Create OracleVM1 (primary):
+```azurecli
+az vm create \
+ --resource-group $RESOURCE_GROUP \
+ --name OracleVM1 \
+ --availability-set OracleVMAvailabilitySet \
+ --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
+ --size Standard_DS1_v2 \
+ --authentication-type password \
+ --admin-username $VM_USERNAME \
+ --admin-password $VM_PASSWORD \
+ --vnet-name $VNET_NAME \
+ --subnet OracleSubnet \
+ --nsg OracleVM-NSG \
+ --os-disk-size-gb 32
+```
+Create OracleVM2 (standby):
```azurecli az vm create \
- --resource-group myResourceGroup \
- --name myVM1 \
- --availability-set myAvailabilitySet \
- --image Oracle:Oracle-Database-Ee:12.1.0.2:latest \
- --size Standard_DS1_v2 \
- --admin-username azureuser \
- --generate-ssh-keys \
+ --resource-group $RESOURCE_GROUP \
+ --name OracleVM2 \
+ --availability-set OracleVMAvailabilitySet \
+ --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
+ --size Standard_DS1_v2 \
+ --authentication-type password \
+ --admin-username $VM_USERNAME \
+ --admin-password $VM_PASSWORD \
+ --vnet-name $VNET_NAME \
+ --subnet OracleSubnet \
+ --nsg OracleVM-NSG \
+ --os-disk-size-gb 32
```
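Because no public IP addresses are created in this setup, you can optionally confirm the private IP addresses assigned to the VMs (this check isn't part of the original walkthrough):

```azurecli
az vm list-ip-addresses \
  --resource-group $RESOURCE_GROUP \
  --output table
```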
-After you create the VM, Azure CLI shows information similar to the following example. Note the value of `publicIpAddress`. You use this address to access the VM.
+### Create the Azure Bastion Service
-```output
-{
- "fqdns": "",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
- "location": "westus",
- "macAddress": "00-0D-3A-36-2F-56",
- "powerState": "VM running",
- "privateIpAddress": "10.0.0.4",
- "publicIpAddress": "13.64.104.241",
- "resourceGroup": "myResourceGroup"
-}
-```
+Azure Bastion provides a secure tunnel to all services hosted within the virtual network. It serves as a Jump Box to eliminate direct access to your Oracle databases.
-Create myVM2 (standby):
+Create a public IP to access the Bastion service:
```azurecli
-az vm create \
- --resource-group myResourceGroup \
- --name myVM2 \
- --availability-set myAvailabilitySet \
- --image Oracle:Oracle-Database-Ee:12.1.0.2:latest \
- --size Standard_DS1_v2 \
- --admin-username azureuser \
- --generate-ssh-keys \
+az network public-ip create \
+ --resource-group $RESOURCE_GROUP \
+ --name OracleLabBastionPublicIP \
+ --sku Standard
+```
+
+Create the Bastion service:
+```azurecli
+az network bastion create \
+ --location $LOCATION \
+ --name OracleLabBastion \
+ --public-ip-address OracleLabBastionPublicIP \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --sku basic
```
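Bastion deployment can take several minutes. Optionally (not a required step in this walkthrough), you can check the provisioning state before continuing:

```azurecli
az network bastion show \
  --name OracleLabBastion \
  --resource-group $RESOURCE_GROUP \
  --query provisioningState \
  --output tsv
```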
-Note the value of `publicIpAddress` after you create myVM2.
+### Connect to the virtual machine
-### Open the TCP port for connectivity
+We will access OracleVM1 using the Bastion service from the Azure portal. In a web browser, go to:
+https://portal.azure.com
-This step configures external endpoints, which allow remote access to the Oracle database.
+In the search box at the top of the window, search for OracleVM1 and select it from the results.
-Open the port for myVM1:
+![Screenshot of the search window.](./media/configure-oracle-dataguard/search-oraclevm1.png)
-```azurecli
-az network nsg rule create --resource-group myResourceGroup\
- --nsg-name myVM1NSG --name allow-oracle\
- --protocol tcp --direction inbound --priority 999 \
- --source-address-prefix '*' --source-port-range '*' \
- --destination-address-prefix '*' --destination-port-range 1521 --access allow
-```
+At the top of the screen, click Connect and select Bastion.
-The result should look similar to the following response:
+![Screenshot of connect via Bastion.](./media/configure-oracle-dataguard/connect-bastion.png)
-```output
-{
- "access": "Allow",
- "description": null,
- "destinationAddressPrefix": "*",
- "destinationPortRange": "1521",
- "direction": "Inbound",
- "etag": "W/\"bd77dcae-e5fd-4bd6-a632-26045b646414\"",
- "id": "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myVmNSG/securityRules/allow-oracle",
- "name": "allow-oracle",
- "priority": 999,
- "protocol": "Tcp",
- "provisioningState": "Succeeded",
- "resourceGroup": "myResourceGroup",
- "sourceAddressPrefix": "*",
- "sourcePortRange": "*"
-}
-```
-
-Open the port for myVM2:
+Enter the Username and Password and click the Connect button.
-```azurecli
-az network nsg rule create --resource-group myResourceGroup\
- --nsg-name myVM2NSG --name allow-oracle\
- --protocol tcp --direction inbound --priority 999 \
- --source-address-prefix '*' --source-port-range '*' \
- --destination-address-prefix '*' --destination-port-range 1521 --access allow
-```
+![Screenshot of connect via Bastion with credentials.](./media/configure-oracle-dataguard/connect-bastion-credentials.png)
-### Connect to the virtual machine
+This will open a new tab with a secure connection to your virtual machine where the Oracle software is already installed from an Azure Marketplace image.
-Use the following command to create an SSH session with the virtual machine. Replace the IP address with the `publicIpAddress` value for your virtual machine.
+![Screenshot of connect via Bastion on browser.](./media/configure-oracle-dataguard/connect-bastion-browser-tab.png)
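If you prefer to stay in the command line, Azure Bastion also supports SSH through the native client with `az network bastion ssh`. Note that this alternative requires Bastion to be deployed with the Standard SKU and native client (tunneling) support enabled, which differs from the basic SKU used above, so treat the following only as a sketch:

```azurecli
# Alternative to the portal connection; requires Bastion Standard SKU with native client support.
TARGET_ID=$(az vm show \
  --resource-group $RESOURCE_GROUP \
  --name OracleVM1 \
  --query id \
  --output tsv)
az network bastion ssh \
  --name OracleLabBastion \
  --resource-group $RESOURCE_GROUP \
  --target-resource-id $TARGET_ID \
  --auth-type password \
  --username $VM_USERNAME
```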
+
+### Configure OracleVM1 (primary)
+
+Disable the Linux firewall on the VM:
+```bash
+sudo systemctl stop firewalld
+sudo systemctl disable firewalld
+```
-```bash
-ssh azureuser@<publicIpAddress>
+Set the **oracle** user password.
+```bash
+sudo passwd oracle
```
+When prompted, enter the **azureuser** password (**OracleLab123**), and then set the **oracle** user password to **OracleLab123** (enter it again to verify).
-### Create the database on myVM1 (primary)
+### Create the database on OracleVM1 (primary)
The Oracle software is already installed on the Marketplace image, so the next step is to install the database. Switch to the Oracle superuser:- ```bash sudo su - oracle ```- Create the database:- ```bash dbca -silent \ -createDatabase \
+ -datafileDestination /u01/app/oracle/cdb1 \
-templateName General_Purpose.dbc \ -gdbname cdb1 \ -sid cdb1 \ -responseFile NO_VALUE \ -characterSet AL32UTF8 \
- -sysPassword OraPasswd1 \
- -systemPassword OraPasswd1 \
+ -sysPassword OracleLab123 \
+ -systemPassword OracleLab123 \
-createAsContainerDatabase true \ -numberOfPDBs 1 \ -pdbName pdb1 \
- -pdbAdminPassword OraPasswd1 \
+ -pdbAdminPassword OracleLab123 \
-databaseType MULTIPURPOSE \ -automaticMemoryManagement false \
- -storageType FS \
- -ignorePreReqs
+ -storageType FS
``` Outputs should look similar to the following response:- ```output Copying database files 1% complete
Creating Pluggable Databases
100% complete Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb1/cdb1.log" for further details. ```- Set the ORACLE_SID and ORACLE_HOME variables: ```bash
- ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
- ORACLE_SID=cdb1; export ORACLE_SID
+$ ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1; export ORACLE_HOME
+$ ORACLE_SID=cdb1; export ORACLE_SID
``` Optionally, you can add ORACLE_HOME and ORACLE_SID to the /home/oracle/.bashrc file, so that these settings are saved for future logins: ```bash # add oracle home
-export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
+export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
# add oracle sid export ORACLE_SID=cdb1 ```- ## Configure Data Guard- ### Enable archive log mode on myVM1 (primary)- ```bash sqlplus / as sysdba SQL> SELECT log_mode FROM v$database;- LOG_MODE NOARCHIVELOG- SQL> SHUTDOWN IMMEDIATE; SQL> STARTUP MOUNT; SQL> ALTER DATABASE ARCHIVELOG; SQL> ALTER DATABASE OPEN; ```- Enable force logging, and make sure at least one log file is present:- ```bash SQL> ALTER DATABASE FORCE LOGGING; SQL> ALTER SYSTEM SWITCH LOGFILE; ```- Create standby redo logs, setting the same size and quantity as the primary database redo logs: ```bash
-SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/cdb1/standby_redo01.log') SIZE 200M;
-SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/cdb1/standby_redo02.log') SIZE 200M;
-SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/cdb1/standby_redo03.log') SIZE 200M;
-SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/cdb1/standby_redo04.log') SIZE 200M;
+SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/cdb1/standby_redo01.log') SIZE 200M;
+SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/cdb1/standby_redo02.log') SIZE 200M;
+SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/cdb1/standby_redo03.log') SIZE 200M;
+SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/cdb1/standby_redo04.log') SIZE 200M;
``` Turn on Flashback (which makes recovery a lot easier) and set STANDBY\_FILE\_MANAGEMENT to auto. Exit SQL*Plus after that. ```bash
-SQL> ALTER DATABASE FLASHBACK ON;
+SQL> ALTER SYSTEM SET db_recovery_file_dest_size=50G scope=both sid='*';
+SQL> ALTER SYSTEM SET db_recovery_file_dest='/u01/app/oracle/cdb1' scope=both sid='*';
+SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO; SQL> EXIT; ```
-### Set up service on myVM1 (primary)
+### Set up service on OracleVM1 (primary)
-Edit or create the tnsnames.ora file, which is in the $ORACLE_HOME\network\admin folder.
+Edit or create the **tnsnames.ora** file, which is in the **$ORACLE_HOME/network/admin** folder.
Add the following entries:
Add the following entries:
cdb1 = (DESCRIPTION = (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = myVM1)(PORT = 1521))
+ (ADDRESS = (PROTOCOL = TCP)(HOST = OracleVM1)(PORT = 1521))
) (CONNECT_DATA = (SID = cdb1) ) )- cdb1_stby = (DESCRIPTION = (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = myVM2)(PORT = 1521))
+ (ADDRESS = (PROTOCOL = TCP)(HOST = OracleVM2)(PORT = 1521))
) (CONNECT_DATA = (SID = cdb1)
cdb1_stby =
) ```
-Edit or create the listener.ora file, which is in the $ORACLE_HOME\network\admin folder.
+Edit or create the **listener.ora** file, which is in the **$ORACLE_HOME/network/admin** folder.
Add the following entries:
Add the following entries:
LISTENER = (DESCRIPTION_LIST = (DESCRIPTION =
- (ADDRESS = (PROTOCOL = TCP)(HOST = myVM1)(PORT = 1521))
+ (ADDRESS = (PROTOCOL = TCP)(HOST = OracleVM1)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)) ) )- SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = cdb1_DGMGRL)
- (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
+ (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
(SID_NAME = cdb1) ) )- ADR_BASE_LISTENER = /u01/app/oracle ```- Enable Data Guard Broker:- ```bash sqlplus / as sysdba SQL> ALTER SYSTEM SET dg_broker_start=true;
+SQL> CREATE pfile FROM spfile;
SQL> EXIT; ```
+Copy the parameter file to the standby server.
+```bash
+scp -p $ORACLE_HOME/dbs/initcdb1.ora oracle@OracleVM2:$ORACLE_HOME/dbs/
+```
Start the listener:
Start the listener:
lsnrctl start ```
-### Set up service on myVM2 (standby)
+### Set up service on OracleVM2 (standby)
-SSH to myVM2:
+Return to the tab with the Azure portal. Search for OracleVM2 and click it.
-```bash
-ssh azureuser@<publicIpAddress>
+![Screenshot of search for OracleVM2.](./media/configure-oracle-dataguard/search-oraclevm2.png)
+
+At the top of the screen, click Connect and select Bastion.
+
+![Screenshot of connecting to VM via Bastion.](./media/configure-oracle-dataguard/connect-bastion.png)
+
+Enter the Username and Password and click the Connect button.
+
+![Screenshot of connecting via Bastion with credentials.](./media/configure-oracle-dataguard/connect-bastion-credentials.png)
+
+### Disable the Firewall on OracleVM2 (standby)
+```bash
+sudo systemctl stop firewalld
+sudo systemctl disable firewalld
```
-Log in as Oracle:
+### Configure the environment for OracleVM2
+Set the **oracle** user password.
+```bash
+sudo passwd oracle
+```
+When prompted, enter the **azureuser** password (**OracleLab123**), and then set the **oracle** user password to **OracleLab123** (enter it again to verify).
+Switch to the **oracle** superuser:
```bash
-sudo su - oracle
+sudo su - oracle
+```
+
+Set the ORACLE_SID and ORACLE_HOME variables:
+```bash
+ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1; export ORACLE_HOME
+ORACLE_SID=cdb1; export ORACLE_SID
+```
+
+Optionally, you can add ORACLE_HOME and ORACLE_SID to the **/home/oracle/.bashrc** file, so that these settings are saved for future logins:
+```bash
+# add oracle home
+export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
+# add oracle sid
+export ORACLE_SID=cdb1
```
-Edit or create the tnsnames.ora file, which is in the $ORACLE_HOME\network\admin folder.
+Edit or create the **tnsnames.ora** file, which is in the **$ORACLE_HOME/network/admin** folder.
Add the following entries:
Add the following entries:
cdb1 = (DESCRIPTION = (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = myVM1)(PORT = 1521))
+ (ADDRESS = (PROTOCOL = TCP)(HOST = OracleVM1)(PORT = 1521))
) (CONNECT_DATA = (SID = cdb1) ) )- cdb1_stby = (DESCRIPTION = (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = myVM2)(PORT = 1521))
+ (ADDRESS = (PROTOCOL = TCP)(HOST = OracleVM2)(PORT = 1521))
) (CONNECT_DATA = (SID = cdb1)
cdb1_stby =
) ```
-Edit or create the listener.ora file, which is in the $ORACLE_HOME\network\admin folder.
+Edit or create the **listener.ora** file, which is in the **$ORACLE_HOME/network/admin** folder.
Add the following entries:
Add the following entries:
LISTENER = (DESCRIPTION_LIST = (DESCRIPTION =
- (ADDRESS = (PROTOCOL = TCP)(HOST = myVM2)(PORT = 1521))
+ (ADDRESS = (PROTOCOL = TCP)(HOST = OracleVM2)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)) ) )- SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = cdb1_DGMGRL)
- (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
+ (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
(SID_NAME = cdb1) ) )- ADR_BASE_LISTENER = /u01/app/oracle ```- Start the listener:- ```bash lsnrctl stop lsnrctl start ```
-### Restore the database to myVM2 (standby)
+### Restore the database to OracleVM2 (standby)
-Create the parameter file /tmp/initcdb1_stby.ora with the following contents:
+Create the parameter file **/tmp/initcdb1_stby.ora** with the following contents:
```bash *.db_name='cdb1' ```- Create folders: ```bash
-mkdir -p /u01/app/oracle/oradata/cdb1/pdbseed
-mkdir -p /u01/app/oracle/oradata/cdb1/pdb1
-mkdir -p /u01/app/oracle/fast_recovery_area/cdb1
-mkdir -p /u01/app/oracle/admin/cdb1/adump
+$ mkdir -p /u01/app/oracle/cdb1
+$ mkdir -p /u01/app/oracle/oradata/cdb1/pdbseed
+$ mkdir -p /u01/app/oracle/oradata/cdb1/pdb1
+$ mkdir -p /u01/app/oracle/fast_recovery_area/cdb1
+$ mkdir -p /u01/app/oracle/admin/cdb1/adump
``` Create a password file: ```bash
- orapwd file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwcdb1 password=OraPasswd1 entries=10
+$ orapwd file=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/orapwcdb1 password=OracleLab123 entries=10
```
-Start the database on myVM2:
+Start the database on OracleVM2:
```bash export ORACLE_SID=cdb1 sqlplus / as sysdba-
+SQL> CREATE spfile from pfile;
SQL> STARTUP NOMOUNT PFILE='/tmp/initcdb1_stby.ora'; SQL> EXIT; ```
SQL> EXIT;
Restore the database by using the RMAN tool: ```bash
- rman TARGET sys/OraPasswd1@cdb1 AUXILIARY sys/OraPasswd1@cdb1_stby
+$ rman TARGET sys/OracleLab123@cdb1 AUXILIARY sys/OracleLab123@cdb1_stby
``` Run the following commands in RMAN:- ```bash DUPLICATE TARGET DATABASE FOR STANDBY
DUPLICATE TARGET DATABASE
SET db_unique_name='CDB1_STBY' COMMENT 'Is standby' NOFILENAMECHECK; ```- You should see messages similar to the following when the command is completed. Exit RMAN. ```output media recovery complete, elapsed time: 00:00:00
-Finished recover at 29-JUN-17
-Finished Duplicate Db at 29-JUN-17
+Finished recover at 29-JUN-22
+Finished Duplicate Db at 29-JUN-22
``` ```bash RMAN> EXIT; ```
-Optionally, you can add ORACLE_HOME and ORACLE_SID to the /home/oracle/.bashrc file, so that these settings are saved for future logins:
-
-```bash
-# add oracle home
-export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
-# add oracle sid
-export ORACLE_SID=cdb1
-```
- Enable Data Guard Broker: ```bash sqlplus / as sysdba
SQL> ALTER SYSTEM SET dg_broker_start=true;
SQL> EXIT; ```
-### Configure Data Guard Broker on myVM1 (primary)
+### Configure Data Guard Broker on OracleVM1 (primary)
Start Data Guard Manager and log in by using SYS and a password. (Do not use OS authentication.) Perform the following: ```bash
- dgmgrl sys/OraPasswd1@cdb1
-DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
-
+$ dgmgrl sys/OracleLab123@cdb1
+DGMGRL for Linux: Version 19.0.0.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.- Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. DGMGRL> CREATE CONFIGURATION my_dg_config AS PRIMARY DATABASE IS cdb1 CONNECT IDENTIFIER IS cdb1;
Database "cdb1_stby" added
DGMGRL> ENABLE CONFIGURATION; Enabled. ```- Review the configuration:- ```bash DGMGRL> SHOW CONFIGURATION;- Configuration - my_dg_config- Protection Mode: MaxPerformance Members: cdb1 - Primary database
- cdb1_stby - Physical standby database
-
+ cdb1_stby - Physical standby database
Fast-Start Failover: DISABLED- Configuration Status: SUCCESS (status updated 26 seconds ago) ```- You've completed the Oracle Data Guard setup. The next section shows you how to test the connectivity and switch over. ### Connect the database from the client machine
-Update or create the tnsnames.ora file on your client machine. This file is usually in $ORACLE_HOME\network\admin.
-
-Replace the IP addresses with your `publicIpAddress` values for myVM1 and myVM2:
+Update the **tnsnames.ora** file on your client machine. This file is usually in **$ORACLE_HOME/network/admin**.
```bash cdb1= (DESCRIPTION= (ADDRESS= (PROTOCOL=TCP)
- (HOST=<myVM1 IP address>)
+ (HOST=OracleVM1)
(PORT=1521) ) (CONNECT_DATA=
cdb1=
(SERVICE_NAME=cdb1) ) )- cdb1_stby= (DESCRIPTION= (ADDRESS= (PROTOCOL=TCP)
- (HOST=<myVM2 IP address>)
+ (HOST=OracleVM2)
(PORT=1521) ) (CONNECT_DATA=
cdb1_stby=
) ) ```- Start SQL*Plus: ```bash
-sqlplus sys/OraPasswd1@cdb1
-SQL*Plus: Release 12.2.0.1.0 Production on Wed May 10 14:18:31 2017
-
+$ sqlplus sys/OracleLab123@cdb1
+SQL*Plus: Release 19.0.0.0 Production on Wed May 10 14:18:31 2022
Copyright (c) 1982, 2016, Oracle. All rights reserved.- Connected to:
-Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
+Oracle Database 19c Enterprise Edition Release 19.0.0.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options- SQL> ``` ## Test the Data Guard configuration
-### Switch over the database on myVM1 (primary)
+### Switch over the database on OracleVM1 (primary)
To switch from primary to standby (cdb1 to cdb1_stby): ```bash
-dgmgrl sys/OraPasswd1@cdb1
-DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
-
+$ dgmgrl sys/OracleLab123@cdb1
+DGMGRL for Linux: Version 19.0.0.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.- Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. DGMGRL> SWITCHOVER TO cdb1_stby;
Database mounted.
Switchover succeeded, new primary is "cdb1_stby" DGMGRL> ```- You can now connect to the standby database.- Start SQL*Plus: ```bash-
-sqlplus sys/OraPasswd1@cdb1_stby
-SQL*Plus: Release 12.2.0.1.0 Production on Wed May 10 14:18:31 2017
-
+$ sqlplus sys/OracleLab123@cdb1_stby
+SQL*Plus: Release 19.0.0.0 Production on Wed May 10 14:18:31 2022
Copyright (c) 1982, 2016, Oracle. All rights reserved.- Connected to:
-Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
+Oracle Database 19c Enterprise Edition Release 19.0.0.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options- SQL> ```
-### Switch over the database on myVM2 (standby)
+### Switch over the database on OracleVM2 (standby)
-To switch over, run the following on myVM2:
+To switch over, run the following on OracleVM2:
```bash
-dgmgrl sys/OraPasswd1@cdb1_stby
-DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
-
+$ dgmgrl sys/OracleLab123@cdb1_stby
+DGMGRL for Linux: Version 19.0.0.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.- Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. DGMGRL> SWITCHOVER TO cdb1;
ORACLE instance started.
Database mounted. Switchover succeeded, new primary is "cdb1" ```- Once again, you should now be able to connect to the primary database.- Start SQL*Plus: ```bash-
-sqlplus sys/OraPasswd1@cdb1
-SQL*Plus: Release 12.2.0.1.0 Production on Wed May 10 14:18:31 2017
-
+$ sqlplus sys/OracleLab123@cdb1
+SQL*Plus: Release 19.0.0.0 Production on Wed May 10 14:18:31 2022
Copyright (c) 1982, 2016, Oracle. All rights reserved.- Connected to:
-Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
+Oracle Database 19c Enterprise Edition Release 19.0.0.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options- SQL> ```- You've finished the installation and configuration of Data Guard on Oracle Linux. - ## Delete the virtual machine- When you no longer need the VM, you can use the following command to remove the resource group, VM, and all related resources: ```azurecli
-az group delete --name myResourceGroup
+az group delete --name $RESOURCE_GROUP
``` ## Next steps- [Tutorial: Create highly available virtual machines](../../linux/create-cli-complete.md)- [Explore VM deployment Azure CLI samples](https://github.com/Azure-Samples/azure-cli-samples/tree/master/virtual-machine)
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
### Requirements and prefix readiness
-### Requirements and prefix readiness
- * The address range must be owned by you and registered under your name with the one of the 5 major Regional Internet Registries: * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/)
The following steps show the steps required to prepare sample customer range (1.
> [!NOTE] > Execute the following commands in PowerShell with OpenSSL installed. -
-
+
1. A [self-signed X509 certificate](https://en.wikipedia.org/wiki/Self-signed_certificate) must be created to add to the Whois/RDAP record for the prefix. For information about RDAP, see the [ARIN](https://www.arin.net/resources/registry/whois/rdap/), [RIPE](https://www.ripe.net/manage-ips-and-asns/db/registration-data-access-protocol-rdap), [APNIC](https://www.apnic.net/about-apnic/whois_search/about/rdap/), and [AFRINIC](https://www.afrinic.net/whois/rdap) sites. An example utilizing the OpenSSL toolkit is shown below. The following commands generate an RSA key pair and create an X509 certificate using the key pair that expires in six months:
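    A minimal sketch using standard OpenSSL options (the file names and subject below are placeholders; adjust them for your registry's requirements):

    ```powershell
    # Generate an RSA key pair (placeholder file names).
    openssl genrsa -out byoip-private.key 2048
    # Create a self-signed X509 certificate from the key pair, valid for roughly six months (180 days).
    openssl req -new -x509 -key byoip-private.key -days 180 -subj '/CN=byoip-example' -out byoip-certificate.cer
    ```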
virtual-network Create Custom Ip Address Prefix Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-cli.md
The steps in this article detail the process to:
> [!IMPORTANT] > Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
-* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Note that global ranges must be /48 in size, while regional ranges must always be /64 size.
+* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 in size.
* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.37 or later of the Azure CLI (you can run az version to determine which you have). If using Azure Cloud Shell, the latest version is already installed. - Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`.-- A customer owned IPv4 range to provision in Azure.
+- A customer-owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but it won't be validated by Azure; replace the example range with yours.
- A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours. > [!NOTE]
The steps in this article detail the process to:
## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform and number of steps prior to the provisioning of your IPv6 address range. Please refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
## Provisioning for IPv6
-The following steps display the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Note that some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-cli.md) to focus on the differences between IPv4 and IPv6.
+The following steps display the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-cli.md) to focus on the differences between IPv4 and IPv6.
### Create a resource group and specify the prefix and authorization messages
Create a resource group in the desired location for provisioning the global rang
### Provision a global custom IPv6 address prefix
-The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `-authorization-message` and `-signed-message` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - CLI](create-custom-ip-address-prefix-cli.md).) Note that no zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `-authorization-message` and `-signed-message` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - CLI](create-custom-ip-address-prefix-cli.md).) No zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
```azurecli-interactive byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|2a05:f500:2::/48|yyyymmdd"
az network custom-ip prefix update \
--state commission ```
-It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, note that this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+> [!NOTE]
+> The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes.
+
+It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
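For reference, decommissioning uses the same `az network custom-ip prefix update` command shown above with a different `--state` value (the prefix and resource group names below are placeholders):

```azurecli
az network custom-ip prefix update \
  --name myCustomIPv6GlobalPrefix \
  --resource-group myResourceGroup \
  --state decommission
```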
+
+> [!IMPORTANT]
+> As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is advertised from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time, for example from a customer's on-premises network, could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
## Next steps
virtual-network Create Custom Ip Address Prefix Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-portal.md
+
+ Title: Create a custom IPv6 address prefix - Azure portal
+
+description: Learn about how to onboard a custom IPv6 address prefix using the Azure portal
++++ Last updated : 05/03/2022+++
+# Create a custom IPv6 address prefix using the Azure portal
+
+A custom IPv6 address prefix enables you to bring your own IPv6 range to Microsoft and associate it with your Azure subscription. The range continues to be owned by you, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer-owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Differences between using BYOIPv4 and BYOIPv6
+
+> [!IMPORTANT]
+> Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
+
+* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 in size.
+
+* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
+
+* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A customer-owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but it won't be validated by Azure; replace the example range with yours.
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-portal.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
+
+## Provisioning for IPv6
+
+The following steps display the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-portal.md) to focus on the differences between IPv4 and IPv6.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Provision a global custom IPv6 address prefix
+
+The following flow creates a custom IP prefix in the specified region and resource group. No zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
+
+### Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+### Create and provision a custom IP address prefix
+
+1. In the search box at the top of the portal, enter **Custom IP**.
+
+2. In the search results, select **Custom IP Prefixes**.
+
+3. Select **+ Create**.
+
+4. In **Create a custom IP prefix**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription |
+ | Resource group | Select **Create new**. </br> Enter **myResourceGroup**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myCustomIPv6GlobalPrefix**. |
+ | Region | Select **West US 2**. |
+ | IP Version | Select IPv6. |
+ | IP prefix range | Select Global. |
+ | Global IPv6 Prefix (CIDR) | Enter **2a05:f500:2::/48**. |
+ | ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+ | Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
+ | Availability Zones | Select **Zone-redundant**. |
+
+ :::image type="content" source="./media/create-custom-ip-address-prefix-ipv6/create-custom-ipv6-prefix.png" alt-text="Screenshot of create custom IP prefix page in Azure portal.":::
+
+5. Select the **Review + create** tab or the blue **Review + create** button at the bottom of the page.
+
+6. Select **Create**.
+
+The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
+
+### Provision a regional custom IPv6 address prefix
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be /64 in size to be considered valid. The ranges can be created in any region (they don't need to be in the same region as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "child" custom IP prefixes are advertised locally from the region they're created in. Because validation is only done during the global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges are advertised from a specific region, zones can be utilized.)
+
+In the same **Create a custom IP prefix** page as before, enter or select the following information:
+
+| Setting | Value |
+| - | -- |
+| **Project details** | |
+| Subscription | Select your subscription |
+| Resource group | Select **Create new**. </br> Enter **myResourceGroup**. </br> Select **OK**. |
+| **Instance details** | |
+| Name | Enter **myCustomIPv6RegionalPrefix**. |
+| Region | Select **West US 2**. |
+| IP Version | Select IPv6. |
+| IP prefix range | Select Regional. |
+| Custom IP prefix parent | Select myCustomIPv6GlobalPrefix (2a05:f500:2::/48) from the drop down menu. |
+| Regional IPv6 Prefix (CIDR) | Enter **2a05:f500:2:1::/64**. |
+| ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+| Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
+| Availability Zones | Select **Zone-redundant**. |
+
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
+
+> [!IMPORTANT]
+> Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range.
+
+### Commission the custom IPv6 address prefixes
+
+When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IPv6 prefixes in their respective regions. Create public IPv6 prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IPv6 prefix and test connectivity to the IPs within the region. Repeat for each regional custom IPv6 prefix.
+3. After all regional custom IPv6 prefixes (and derived prefixes/IPs) have been verified to work as expected, commission the global custom IPv6 prefix, which will advertise the larger range to the Internet.
+
+To commission a custom IPv6 prefix (regional or global) using the portal:
+
+1. In the search box at the top of the portal, enter **Custom IP** and select **Custom IP Prefixes**.
+
+2. Verify the custom IPv6 prefix is in a **Provisioned** state.
+
+3. In **Custom IP Prefixes**, select the desired custom IPv6 prefix.
+
+4. On the **Overview** page of the custom IPv6 prefix, select the **Commission** button near the top of the screen. If the range is global, it begins advertising from the Microsoft WAN. If the range is regional, it's advertised only from the specific region.
+
+Using the example ranges above, the sequence would be to commission myCustomIPv6RegionalPrefix first, followed by myCustomIPv6GlobalPrefix.
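
If you script this sequence instead of using the portal, a minimal sketch with `Update-AzCustomIpPrefix` (assuming the prefix objects are available in your session) follows the same order:

```azurepowershell-interactive
# Commission the regional (child) prefix first and verify connectivity,
# then commission the global (parent) prefix to advertise the /48 to the Internet.
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6RegionalPrefix.Id -Commission
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission
```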
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes.
+
+It's possible to commission the global custom IPv6 prefix before the regional custom IPv6 prefixes. However, doing so means the global range is advertised to the Internet before the regional prefixes are ready, so it isn't recommended for migrations of active ranges. It's also possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes, or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+
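Reversing a commission is a similar call. The following sketch assumes the same prefix objects and that the `-Decommission` switch of `Update-AzCustomIpPrefix` is available in your Az.Network version:

```azurepowershell-interactive
# Decommission the global (parent) prefix to stop the Internet advertisement,
# then (optionally) decommission the regional (child) prefix.
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Decommission
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6RegionalPrefix.Id -Decommission
```
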
+> [!IMPORTANT]
+> As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is advertised by Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md).
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+
+- To create a custom IP address prefix using the Azure CLI, see [Create custom IP address prefix using the Azure CLI](create-custom-ip-address-prefix-cli.md).
+
+- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
The steps in this article detail the process to:
> [!IMPORTANT] > Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
-* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Note that global ranges must be /48 in size, while regional ranges must always be /64 size.
+* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 in size.
* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
The steps in this article detail the process to:
- Azure PowerShell installed locally or Azure Cloud Shell. - Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). - Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.-- A customer owned IP range to provision in Azure.
- - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but it won't be validated by Azure; replace the example range with your own.
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
If you choose to install and use PowerShell locally, this article requires the A
## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform and number of steps prior to the provisioning of your IPv6 address range. Please refer to the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md#pre-provisioning-steps) for details. Note all these steps should be completed for the IPv6 global (parent) range.
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md#pre-provisioning-steps) for details. All of these steps should be completed for the IPv6 global (parent) range.
## Provisioning for IPv6
-The following steps display the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Note that some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md) to focus on the differences between IPv4 and IPv6.
+The following steps display the modified steps for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md) to focus on the differences between IPv4 and IPv6.
### Create a resource group and specify the prefix and authorization messages
New-AzResourceGroup @rg
### Provision a global custom IPv6 address prefix
-The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `-AuthorizationMessage` and `-SignedMessage` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - PowerShell](create-custom-ip-address-prefix-powershell.md).) Note that no zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `-AuthorizationMessage` and `-SignedMessage` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - PowerShell](create-custom-ip-address-prefix-powershell.md).) No zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
```azurepowershell-interactive $prefix =@{
Followed by:
```azurepowershell-interactive Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission ```
+> [!NOTE]
+> The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes.
+
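Because commissioning can take several hours, you might poll the state from PowerShell rather than watch the portal. A rough sketch, assuming the example names from this article:

```azurepowershell-interactive
# Poll every 5 minutes until the global prefix reports Commissioned.
do {
    Start-Sleep -Seconds 300
    $state = (Get-AzCustomIpPrefix -Name myCustomIPv6GlobalPrefix -ResourceGroupName myResourceGroup).CommissionedState
    Write-Output "Current state: $state"
} while ($state -ne 'Commissioned')
```
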
+It's possible to commission the global custom IPv6 prefix before the regional custom IPv6 prefixes. However, doing so means the global range is advertised to the Internet before the regional prefixes are ready, so it isn't recommended for migrations of active ranges. It's also possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes, or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
-It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, note that this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+> [!IMPORTANT]
+> As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is advertised by Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
## Next steps - To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md). -- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
3. Select **+ Create**.
-4. In **Create a custom IP prefix**, enter or select the following information in the **Basics** tab:
+4. In **Create a custom IP prefix**, enter or select the following information:
| Setting | Value | | - | -- |
Sign in to the [Azure portal](https://portal.azure.com).
| **Instance details** | | | Name | Enter **myCustomIPPrefix**. | | Region | Select **West US 2**. |
- | Availability Zones | Select **Zone-redundant**. |
+ | IP Version | Select IPv4. |
| IPv4 Prefix (CIDR) | Enter **1.2.3.0/24**. | | ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
- | Signed message | Paste in the output of **$byoipauthsigned** from the earlier section. |
+ | Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
+ | Availability Zones | Select **Zone-redundant**. |
:::image type="content" source="./media/create-custom-ip-address-prefix-portal/create-custom-ip-prefix.png" alt-text="Screenshot of create custom IP prefix page in Azure portal.":::
When you create a prefix, you must create static IP addresses from the prefix. I
When the custom IP prefix is in **Provisioned** state, update the prefix to begin the process of advertising the range from Azure. 1. In the search box at the top of the portal, enter **Custom IP** and select **Custom IP Prefixes**.
-1. Verify, and wait if necessary, for **myCustomIPPrefix** to be is listed in a **Provisioned** state.
-1. In **Custom IP Prefixes**, select **myCustomIPPrefix**.
+2. Verify, and wait if necessary, for **myCustomIPPrefix** to be listed in a **Provisioned** state.
+
+3. In **Custom IP Prefixes**, select **myCustomIPPrefix**.
-1. In **Overview** of **myCustomIPPrefix**, select the **Commission** dropdown menu and choose **Globally**.
+4. On the **Overview** page of **myCustomIPPrefix**, select the **Commission** dropdown menu and choose **Globally**.
The operation is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. Initially, the status shows the prefix as **Commissioning**, followed later by **Commissioned**. The advertisement rollout isn't binary, and the range will be partially advertised while still in the **Commissioning** status.
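
If you'd rather trigger the same commission from Azure PowerShell, a minimal sketch using the example names **myCustomIPPrefix** and **myResourceGroup** follows; the plain `-Commission` switch is assumed here to correspond to the portal's **Globally** option.

```azurepowershell-interactive
# Look up the provisioned prefix and start the commissioning process.
$pfx = Get-AzCustomIpPrefix -Name myCustomIPPrefix -ResourceGroupName myResourceGroup
Update-AzCustomIpPrefix -ResourceId $pfx.Id -Commission

# The operation is asynchronous; re-check the state until it reads Commissioned.
(Get-AzCustomIpPrefix -Name myCustomIPPrefix -ResourceGroupName myResourceGroup).CommissionedState
```
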
vpn-gateway Nva Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nva-work-remotely-support.md
description: Learn about the things that you should take into consideration work
Previously updated : 09/08/2020 Last updated : 05/05/2023 # Working remotely: Network Virtual Appliance (NVA) considerations for remote work
->[!NOTE]
->This article describes how you can leverage Network Virtual Appliances, Azure, Microsoft network, and the Azure partner ecosystem to work remotely and mitigate network issues that you are facing because of COVID-19 crisis.
->
- Some Azure customers utilize third-party Network Virtual Appliances (NVAs) from Azure Marketplace to provide critical services such as Point-to-site VPN for their employees who are working from home during the COVID-19 epidemic. This article outlines some high-level guidance to take into consideration when leveraging NVAs in Azure to provide remote access solutions. ## NVA performance considerations
vpn-gateway Vpn Gateway Howto Aws Bgp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-aws-bgp.md
Next, you'll connect your AWS tunnels to Azure. For each of the four tunnels, yo
:::image type="content" source="./media/vpn-gateway-howto-aws-bgp/create-connection.png" alt-text="Modifying connection" ::: 9. From the **Connections** page for your VPN gateway, select the connection you created and navigate to the **Configuration** page.
-10. Select **ResponderOnly** for the **Connection Mode** and select **Save**.
+10. Select **InitiatorOnly** for the **Connection Mode** and select **Save**.
:::image type="content" source="./media/vpn-gateway-howto-aws-bgp/responder-only.png" alt-text="Make connections ResponderOnly" :::