Updates from: 01/05/2022 02:10:16
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
Follow these steps to access the **Mappings** feature of user provisioning:
![Use Attribute Mapping to configure attribute mappings for apps](./media/customize-application-attributes/22.png)

In this screenshot, you can see that the **Username** attribute of a managed object in Salesforce is populated with the **userPrincipalName** value of the linked Azure Active Directory object.
+
+ > [!NOTE]
+ > Clearing **Create** doesn't affect existing users. If **Create** isn't selected, you can't create new users.
1. Select an existing **Attribute Mapping** to open the **Edit Attribute** screen. Here you can edit the user attributes that flow between Azure AD and the target application.
active-directory Msal Logging Js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-logging-js.md
Previously updated : 01/25/2021 Last updated : 12/21/2021
## Configure logging in MSAL.js
-Enable logging in MSAL.js (JavaScript) by passing a logger object during the configuration for creating a `UserAgentApplication` instance. This logger object has the following properties:
+Enable logging in MSAL.js (JavaScript) by passing a loggerOptions object during the configuration for creating a `PublicClientApplication` instance. The only required config parameter is the client ID of the application. Everything else is optional, but may be required depending on your tenant and application model.
-- `localCallback`: a Callback instance that can be provided by the developer to consume and publish logs in a custom manner. Implement the localCallback method depending on how you want to redirect logs.
-- `level` (optional): the configurable log level. The supported log levels are: `Error`, `Warning`, `Info`, and `Verbose`. The default is `Info`.
+The loggerOptions object has the following properties:
+
+- `loggerCallback`: a Callback function that can be provided by the developer to handle the logging of MSAL statements in a custom manner. Implement the `loggerCallback` function depending on how you want to redirect logs. The `loggerCallback` function has the following format: `(level: LogLevel, message: string, containsPii: boolean): void`.
+ - The supported log levels are: `Error`, `Warning`, `Info`, and `Verbose`. The default is `Info`.
- `piiLoggingEnabled` (optional): if set to true, logs personal and organizational data. By default this is false so that your application doesn't log personal data. Personal data logs are never written to default outputs like Console, Logcat, or NSLog.
-- `correlationId` (optional): a unique identifier, used to map the request with the response for debugging purposes. Defaults to RFC4122 version 4 guid (128 bits).
```javascript
-function loggerCallback(logLevel, message, containsPii) {
- console.log(message);
-}
-
-var msalConfig = {
+const msalConfig = {
auth: {
- clientId: "<Enter your client id>",
+ clientId: "enter_client_id_here",
+ authority: "https://login.microsoftonline.com/common",
+ knownAuthorities: [],
+ cloudDiscoveryMetadata: "",
+ redirectUri: "enter_redirect_uri_here",
+ postLogoutRedirectUri: "enter_postlogout_uri_here",
+ navigateToLoginRequestUrl: true,
+ clientCapabilities: ["CP1"]
+ },
+ cache: {
+ cacheLocation: "sessionStorage",
+ storeAuthStateInCookie: false,
+ secureCookies: false
},
system: {
- logger: new Msal.Logger(
- loggerCallback , {
- level: Msal.LogLevel.Verbose,
- piiLoggingEnabled: false,
- correlationId: '1234'
- }
- )
- }
+ loggerOptions: {
+ loggerCallback: (level: LogLevel, message: string, containsPii: boolean): void => {
+ if (containsPii) {
+ return;
+ }
+ switch (level) {
+ case LogLevel.Error:
+ console.error(message);
+ return;
+ case LogLevel.Info:
+ console.info(message);
+ return;
+ case LogLevel.Verbose:
+ console.debug(message);
+ return;
+ case LogLevel.Warning:
+ console.warn(message);
+ return;
+ }
+ },
+ piiLoggingEnabled: false
+ },
+ windowHashTimeout: 60000,
+ iframeHashTimeout: 6000,
+ loadFrameTimeout: 0,
+ asyncPopups: false
+ }
}
-var UserAgentApplication = new Msal.UserAgentApplication(msalConfig);
+const msalInstance = new PublicClientApplication(msalConfig);
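+
+// Illustrative addition (not in the original sample): once logging is configured,
+// later MSAL calls surface their output through loggerCallback. For example, a
+// sign-in call such as the following (the requested scope is an assumption):
+//
+//   msalInstance.loginPopup({ scopes: ["User.Read"] })
+//       .then((result) => console.info("Signed in as", result.account?.username))
+//       .catch((error) => console.error(error));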
```

## Next steps
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
There are certain sets of claims that define how and when they're used in tokens
| verified_secondary_email |
| wids |
| win_ver |
+| nickname |
### Table 2: SAML restricted claim set
active-directory Compare With B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/compare-with-b2c.md
The following table gives a detailed comparison of the scenarios you can enable
| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](conditional-access.md)). | Managed by the organization via Conditional Access and Identity Protection. |
| **Branding** | Host/inviting organization's brand is used. | Fully customizable branding per application or organization. |
| **Billing model** | [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2B setup details](external-identities-pricing.md)) | [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2C setup details](../../active-directory-b2c/billing.md)) |
-| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
+| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Supported Azure AD features](../../active-directory-b2c/supported-azure-ad-features.md), [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
Secure and manage customers and partners beyond your organizational boundaries with Azure AD External Identities.
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
We tend to think that administrator accounts are the only accounts that need ext
After these attackers gain access, they can request access to privileged information for the original account holder. They can even download the entire directory to do a phishing attack on your whole organization.
-One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for another authentication whenever necessary. Users will be prompted primarily when they authenticate using a new device from a new location, or when doing critical roles and tasks. This functionality protects all applications registered with Azure AD including SaaS applications.
+One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for another authentication whenever necessary. Azure AD decides when a user will be prompted for Multi-Factor Authentication, based on factors such as location, device, role, and task. This functionality protects all applications registered with Azure AD including SaaS applications.
### Blocking legacy authentication
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/secure-hybrid-access.md
Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can pro
In addition to [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md), Microsoft partners with third-party providers to enable secure access to your on-premises applications and applications that use legacy authentication.
-![Image shows secure hybrid access with app proxy and partners](./media/secure-hybrid-access/secure-hybrid-access.png)
+![Illustration of Secure Hybrid Access partner integrations and Application Proxy providing access to legacy and on-premises applications after authentication with Azure AD.](./media/secure-hybrid-access/secure-hybrid-access.png)
The following partners offer pre-built solutions to support **conditional access policies per application** and provide detailed guidance for integrating with Azure AD.
active-directory Tutorial Log Analytics Wizard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/tutorial-log-analytics-wizard.md
Title: Configure the log analytics wizard in Azure AD | Microsoft Docs
+ Title: Configure a log analytics workspace in Azure AD | Microsoft Docs
description: Learn how to configure log analytics.
-# Tutorial: Configure the log analytics wizard
+# Tutorial: Configure a log analytics workspace
In this tutorial, you learn how to:
This procedure shows how to add a query to an existing workbook template. The ex
Advance to the next article to learn how to manage device identities by using the Azure portal.

> [!div class="nextstepaction"]
-> [Monitoring](overview-monitoring.md)
+> [Monitoring](overview-monitoring.md)
active-directory Workplace By Facebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workplace-by-facebook-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to Workplace by Facebook's authorization page. Input your Workplace by Facebook username and click on the **Continue** button. Click **Test Connection** to ensure Azure AD can connect to Workplace by Facebook. If the connection fails, ensure your Workplace by Facebook account has Admin permissions and try again.
+5. Ensure the **Tenant URL** field is populated with the correct endpoint: `https://scim.workplace.com/`. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to Workplace by Facebook's authorization page. Input your Workplace by Facebook username and click on the **Continue** button. Click **Test Connection** to ensure Azure AD can connect to Workplace by Facebook. If the connection fails, ensure your Workplace by Facebook account has Admin permissions and try again.
![Screenshot shows Admin Credentials dialog box with an Authorize option.](./media/workplace-by-facebook-provisioning-tutorial/provisionings.png)
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
AKS provides a single-tenant control plane, with a dedicated API server, schedul
While you don't need to configure components (like a highly available *etcd* store) with this managed control plane, you can't access the control plane directly. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs through Azure Monitor logs.
-To configure or directly access a control plane, deploy your own Kubernetes cluster using [aks-engine][aks-engine].
+To configure or directly access a control plane, deploy a self-managed Kubernetes cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
For associated best practices, see [Best practices for cluster security and upgrades in AKS][operator-best-practices-cluster-security].
The Azure VM size for your nodes defines the storage CPUs, memory, size, and typ
In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts (including [Azure reservations][reservation-discounts]) are automatically applied.
-Deploy your own Kubernetes cluster with [aks-engine][aks-engine] if using a different host OS, container runtime, or including different custom packages. The upstream `aks-engine` releases features and provides configuration options ahead of support in AKS clusters. So, if you wish to use a container runtime other than `containerd` or Docker, you can run `aks-engine` to configure and deploy a Kubernetes cluster that meets your current needs.
+If you need advanced configuration and control on your Kubernetes node container runtime and OS, you can deploy a self-managed cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
### Resource reservations
This article covers some of the core Kubernetes components and how they apply to
<!-- EXTERNAL LINKS --> [aks-engine]: https://github.com/Azure/aks-engine
+[cluster-api-provider-azure]: https://github.com/kubernetes-sigs/cluster-api-provider-azure
[kubernetes-pods]: https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ [kubernetes-pod-lifecycle]: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/ [kubernetes-deployments]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
This article introduces the core concepts that secure your applications in AKS:
## Build Security
-As the entry point for the Supply Chain it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing off a build because it has a high vulnerability, as that will break development, it is about looking at the "Vendor Status" to segment based on vulnerabilities that are actionable by the development teams. Also leverage "Grace Periods" to allow developers time to remediate identified issues.
+As the entry point for the Supply Chain, it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing a build because it has a vulnerability, as that will break development. It is about looking at the "Vendor Status" to segment based on vulnerabilities that are actionable by the development teams. Also leverage "Grace Periods" to allow developers time to remediate identified issues.
## Registry Security
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app.md
aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.18.10
## Install Open Liberty Operator
-After creating and connecting to the cluster, install the [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator/tree/master/deploy/releases/0.7.1) by running the following commands.
+After creating and connecting to the cluster, install the [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator/tree/main/deploy/releases/0.8.0#option-2-install-using-kustomize) by running the following commands.
```azurecli-interactive
-OPERATOR_NAMESPACE=default
-WATCH_NAMESPACE='""'
-
-# Install Custom Resource Definitions (CRDs) for OpenLibertyApplication
-kubectl apply -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.1/openliberty-app-crd.yaml
-
-# Install cluster-level role-based access
-curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.1/openliberty-app-cluster-rbac.yaml \
- | sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/" \
- | kubectl apply -f -
-
-# Install the operator
-curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.1/openliberty-app-operator.yaml \
- | sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${WATCH_NAMESPACE}/" \
- | kubectl apply -n ${OPERATOR_NAMESPACE} -f -
+# Install Open Liberty Operator
+mkdir -p overlays/watch-all-namespaces
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/overlays/watch-all-namespaces/olo-all-namespaces.yaml -q -P ./overlays/watch-all-namespaces
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/overlays/watch-all-namespaces/cluster-roles.yaml -q -P ./overlays/watch-all-namespaces
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/overlays/watch-all-namespaces/kustomization.yaml -q -P ./overlays/watch-all-namespaces
+mkdir base
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/base/kustomization.yaml -q -P ./base
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/base/open-liberty-crd.yaml -q -P ./base
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/base/open-liberty-operator.yaml -q -P ./base
+kubectl apply -k overlays/watch-all-namespaces
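+
+# Optional check (illustrative, not part of the original instructions): confirm the
+# operator deployment rolled out before moving on. The grep pattern is an assumption
+# based on the default resource names in the 0.8.0 kustomize manifests.
+kubectl get deployments --all-namespaces | grep -iE "open-liberty|olo"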
```

## Build application image
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
Each number in the version indicates general compatibility with the previous ver
Aim to run the latest patch release of the minor version you're running. For example, your production cluster is on **`1.17.7`**. **`1.17.8`** is the latest patch version available for the *1.17* series. You should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported.
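As a quick illustration (not part of the original article), you can check the available upgrade versions and apply that patch upgrade with the Azure CLI; the resource group and cluster names below are placeholder values:

```azurecli-interactive
# List the upgrade versions available for the cluster
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade to the latest patch of the minor version you're already running
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.17.8
```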
-## Kubernetes Alias Minor Version
+## Kubernetes Alias Minor Version (Preview)
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]

> [!NOTE]
> Alias Minor Version requires Azure CLI version 2.31.0 or above with the aks-preview extension installed. Please use `az upgrade` to install the latest version of the CLI.
+You will need the *aks-preview* Azure CLI extension version 0.5.49 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+ Azure Kubernetes Service allows you to create a cluster without specifying the exact patch version. When creating a cluster without specifying a patch, the cluster will run the minor version's latest patch. For example, if you create a cluster with **`1.21`**, your cluster will be running **`1.21.7`**, which is the latest patch version of *1.21*. To see which patch you are on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The property `currentKubernetesVersion` shows the whole Kubernetes version.
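A minimal sketch of that flow, assuming the aks-preview extension above is installed; the resource group and cluster names are placeholders:

```azurecli-interactive
# Create a cluster by specifying only the minor version alias
az aks create --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.21 --generate-ssh-keys

# Inspect the patch version the cluster resolved to
az aks show --resource-group myResourceGroup --name myAKSCluster --query currentKubernetesVersion --output tsv
```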
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
<!-- LINKS - Internal --> [aks-upgrade]: upgrade-cluster.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az-extension-update
[az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions [preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ [get-azaksversion]: /powershell/module/az.aks/get-azaksversion
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-deploy-application.md
Get the ACR login server name using the [Get-AzContainerRegistry][get-azcontaine
-The sample manifest file from the git repo cloned in the first tutorial uses the login server name of *microsoft*. Make sure that you're in the cloned *azure-voting-app-redis* directory, then open the manifest file with a text editor, such as `vi`:
+The sample manifest file from the git repo cloned in the first tutorial uses the images from Microsoft Container Registry (*mcr.microsoft.com*). Make sure that you're in the cloned *azure-voting-app-redis* directory, then open the manifest file with a text editor, such as `vi`:
```console
vi azure-vote-all-in-one-redis.yaml
```
-Replace *microsoft* with your ACR login server name. The image name is found on line 60 of the manifest file. The following example shows the default image name:
+Replace *mcr.microsoft.com* with your ACR login server name. The image name is found on line 60 of the manifest file. The following example shows the default image name:
```yaml
containers:
Advance to the next tutorial to learn how to scale a Kubernetes application and
[kubernetes-service]: concepts-network.md#services [azure-powershell-install]: /powershell/azure/install-az-ps [get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry
-[gitops-flux-tutorial]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md
-[gitops-flux-tutorial-aks]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md#for-azure-kubernetes-service-clusters
+[gitops-flux-tutorial]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md?toc=/azure/aks/toc.json
+[gitops-flux-tutorial-aks]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md?toc=/azure/aks/toc.json#for-azure-kubernetes-service-clusters
api-management Api Management Dapr Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-dapr-policies.md
template:
dapr.io/app-id: "app-name"
```
+> [!TIP]
+> You can also deploy the [self-hosted gateway with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) and use the Dapr configuration options.
## Distributed Application Runtime (Dapr) integration policies
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
API Management **Premium** tier also supports [zone redundancy](zone-redundancy.
[api-management-aad-resources]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-aad-resources.png [api-management-arm-token]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-arm-token.png [api-management-endpoint]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-endpoint.png
-[control-plane-ip-address]: api-management-using-with-vnet.md#control-plane-ip-addresses
+[control-plane-ip-address]: virtual-network-reference.md#control-plane-ip-addresses
[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-ip-addresses.md
Previously updated : 04/13/2021 Last updated : 12/21/2021
In [multi-regional deployments](api-management-howto-deploy-multi-region.md), ea
## IP addresses of API Management service in VNet
-If your API Management service is inside a virtual network, it will have two types of IP addresses - public and private.
+If your API Management service is inside a virtual network, it will have two types of IP addresses: public and private.
-Public IP addresses are used for internal communication on port `3443` - for managing configuration (for example, through Azure Resource Manager). In the external VNet configuration, they are also used for runtime API traffic.
+* Public IP addresses are used for internal communication on port `3443` - for managing configuration (for example, through Azure Resource Manager). In the external VNet configuration, they are also used for runtime API traffic.
-Private virtual IP (VIP) addresses, available **only** in the [internal VNet mode](api-management-using-with-internal-vnet.md), are used to connect from within the network to API Management endpoints - gateways, the developer portal, and the management plane for direct API access. You can use them for setting up DNS records within the network.
+* Private virtual IP (VIP) addresses, available **only** in the [internal VNet mode](api-management-using-with-internal-vnet.md), are used to connect from within the network to API Management endpoints - gateways, the developer portal, and the management plane for direct API access. You can use them for setting up DNS records within the network.
You will see addresses of both types in the Azure portal and in the response of the API call:
GET https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/
API Management uses a public IP address for connections outside the VNet and a private IP address for connections within the VNet.
-When API management is deployed in the [internal VNet configuration](api-management-using-with-internal-vnet.md) and API management connects to private (intranet-facing) backends, internal IP addresses from the subnet are used for the runtime API traffic. When a request is sent from API Management to a private backend, a private IP address will be visible as the origin of the request. Therefore in this configuration, if a requirement exists to restrict traffic between API Management and an internal backend, it is better to use the whole API Management subnet prefix with an IP rule and not just the private IP address associated with the API Management resource.
+When API management is deployed in the [internal VNet configuration](api-management-using-with-internal-vnet.md) and API management connects to private (intranet-facing) backends, internal IP addresses (dynamic IP, or DIP addresses) from the subnet are used for the runtime API traffic. When a request is sent from API Management to a private backend, a private IP address will be visible as the origin of the request. Therefore in this configuration, if IP restriction lists secure resources within the VNet, it is recommended to use the whole API Management [subnet range](virtual-network-concepts.md#subnet-size) with an IP rule and not just the private IP address associated with the API Management resource.
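As an illustration of that recommendation (not taken from the article), an inbound rule on the backend subnet's network security group can allow the whole API Management subnet rather than a single address; the names, priority, and address prefix below are placeholder values:

```azurecli-interactive
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name backend-subnet-nsg \
    --name AllowApimSubnet \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes 10.0.1.0/24 \
    --destination-port-ranges 443
```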
When a request is sent from API Management to a public-facing (internet-facing) backend, a public IP address will always be visible as the origin of the request.
For traffic restriction purposes, you can use the range of IP addresses of Azure
## Changes to the IP addresses
-In the Developer, Basic, Standard, and Premium tiers of API Management, the public IP addresses (VIP) are static for the lifetime of a service, with the following exceptions:
+In the Developer, Basic, Standard, and Premium tiers of API Management, the public IP addresses (VIP) and private IP addresses (if configured in the internal VNet mode) are static for the lifetime of a service, with the following exceptions:
* The service is deleted and then re-created. * The service subscription is [suspended](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) or [warned](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) (for example, for nonpayment) and then reinstated.
api-management Api Management Howto Provision Self Hosted Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-provision-self-hosted-gateway.md
Now the gateway resource has been provisioned in your API Management instance. Y
## Next steps

* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md)
-* Learn more about how to [Deploy a self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
- Learn more about how to [Deploy a self-hosted gateway to an Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+* Learn more about how to deploy a self-hosted gateway to Kubernetes using a [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) or [with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md)
* Learn more about how to [Deploy a self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
Previously updated : 08/10/2021 Last updated : 01/03/2022

# Connect to a virtual network in internal mode using Azure API Management
-With Azure virtual networks (VNETs), Azure API Management can manage internet-inaccessible APIs using several VPN technologies to make the connection. You can deploy API Management either via [external](./api-management-using-with-vnet.md) or internal modes. For VNET connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
+With Azure virtual networks (VNets), Azure API Management can manage internet-inaccessible APIs using several VPN technologies to make the connection. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
-In this article, you'll learn how to deploy API Management in internal VNET mode. In this mode, you can only view the following service endpoints within a VNET whose access you control.
+This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode. In this mode, you can only access the following service endpoints within a VNet whose access you control.
* The API gateway
* The developer portal
* Direct management
* Git

> [!NOTE]
-> None of the service endpoints are registered on the public DNS. The service endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNET.
+> None of the service endpoints are registered on the public DNS. The service endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
Use API Management in internal mode to:
Use API Management in internal mode to:
* Enable hybrid cloud scenarios by exposing your cloud-based APIs and on-premises APIs through a common gateway.
* Manage your APIs hosted in multiple geographic locations, using a single gateway endpoint.
+
+For configurations specific to the *external* mode, where the service endpoints are accessible from the public internet, and backend services are located in the network, see [Connect to a virtual network using Azure API Management](api-management-using-with-vnet.md).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] [!INCLUDE [premium-dev.md](../../includes/api-management-availability-premium-dev.md)]
-## Prerequisites
-
-Some prerequisites differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) for your API Management instance.
-
-> [!TIP]
-> When you use the portal to create or update the network connection of an existing API Management instance, the instance is hosted on the `stv2` compute platform.
-
-### [stv2](#tab/stv2)
-
-+ **An API Management instance.** For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
-
-* **A virtual network and subnet** in the same region and subscription as your API Management instance. The subnet may contain other Azure resources.
-
-* **A network security group** attached to the subnet above. A network security group (NSG) is required to explicitly allow inbound connectivity, because the load balancer used internally by API Management is secure by default and rejects all inbound traffic.
--
- > [!NOTE]
- > When you deploy an API Management service in an internal virtual network on the `stv2` platform, it's hosted behind an internal load balancer in the [Standard SKU](../load-balancer/skus.md), using the public IP address resource.
-
-### [stv1](#tab/stv1)
-
-+ **An API Management instance.** For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
-
-* **A virtual network and subnet** in the same region and subscription as your API Management instance.
-
- The subnet must be dedicated to API Management instances. Attempting to deploy an Azure API Management instance to a Resource Manager VNET subnet that contains other resources will cause the deployment to fail.
-
- > [!NOTE]
- > When you deploy an API Management service in an internal virtual network on the `stv1` platform, it's hosted behind an internal load balancer in the [Basic SKU](../load-balancer/skus.md).
--
-## Enable VNET connection
+## Enable VNet connection
-### Enable VNET connectivity using the Azure portal (`stv2` platform)
+### Enable VNet connectivity using the Azure portal (`stv2` platform)
1. Go to the [Azure portal](https://portal.azure.com) to find your API management instance. Search for and select **API Management services**.
1. Choose your API Management instance.
Some prerequisites differ depending on the version (`stv2` or `stv1`) of the [co
1. In the list of locations (regions) where your API Management service is provisioned:
   1. Choose a **Location**.
   1. Select **Virtual network**, **Subnet**, and **IP address**.
- * The VNET list is populated with Resource Manager VNETs available in your Azure subscriptions, set up in the region you are configuring.
-1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new VNET and subnet choices.
- :::image type="content" source="media/api-management-using-with-internal-vnet/api-management-using-with-internal-vnet.png" alt-text="Set up internal VNET in Azure portal":::
-1. Continue configuring VNET settings for the remaining locations of your API Management instance.
+ * The VNet list is populated with Resource Manager VNets available in your Azure subscriptions, set up in the region you are configuring.
+1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new VNet and subnet choices.
+ :::image type="content" source="media/api-management-using-with-internal-vnet/api-management-using-with-internal-vnet.png" alt-text="Set up internal VNet in Azure portal":::
+1. Continue configuring VNet settings for the remaining locations of your API Management instance.
1. In the top navigation bar, select **Save**, then select **Apply network configuration**. It can take 15 to 45 minutes to update the API Management instance.
After successful deployment, you should see your API Management service's **priv
:::image type="content" source="media/api-management-using-with-internal-vnet/api-management-internal-vnet-dashboard.png" alt-text="Public and private IP addresses in Azure portal":::

> [!NOTE]
-> Since the gateway URL is not registered on the public DNS, the test console available on the Azure portal will not work for an **Internal** VNET deployed service. Instead, use the test console provided on the **Developer portal**.
-
-### Enable connectivity using a Resource Manager template
+> Since the gateway URL is not registered on the public DNS, the test console available on the Azure portal will not work for an **internal** VNet deployed service. Instead, use the test console provided on the **developer portal**.
+
+### Enable connectivity using a Resource Manager template (`stv2` platform)
-#### API version 2021-01-01-preview (`stv2` platform)
-
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-internal-vnet-publicip)
+* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-internal-vnet-publicip) (API version 2021-01-01-preview)
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-internal-vnet-publicip%2Fazuredeploy.json)
-#### API version 2020-12-01 (`stv1` platform)
-
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-internal-vnet)
-
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-internal-vnet%2Fazuredeploy.json)
- ### Enable connectivity using Azure PowerShell cmdlets (`stv1` platform)
-[Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a VNET.
+[Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a VNet.
+ ## DNS configuration
-In external VNET mode, Azure manages the DNS. For internal VNET mode, you have to manage your own DNS to enable inbound access to your API Management service endpoints.
+In internal VNet mode, you have to manage your own DNS to enable inbound access to your API Management service endpoints.
We recommend:

1. Configure an Azure [DNS private zone](../dns/private-dns-overview.md).
-1. Link the Azure DNS private zone to the VNET into which you've deployed your API Management service.
+1. Link the Azure DNS private zone to the VNet into which you've deployed your API Management service.
Learn how to [set up a private zone in Azure DNS](../dns/private-dns-getstarted-portal.md).
When you create an API Management service (`contosointernalvnet`, for example),
| Direct management endpoint | `contosointernalvnet.management.azure-api.net` |
| Git | `contosointernalvnet.scm.azure-api.net` |
-To access these API Management service endpoints, you can create a virtual machine in a subnet connected to the VNET in which API Management is deployed. Assuming the [private virtual IP address](#routing) for your service is 10.1.0.5, you can map the hosts file as follows. On Windows, this file is at `%SystemDrive%\drivers\etc\hosts`.
+To access these API Management service endpoints, you can create a virtual machine in a subnet connected to the VNet in which API Management is deployed. Assuming the [private virtual IP address](#routing) for your service is 10.1.0.5, you can map the hosts file as follows. The hosts mapping file is at `%SystemRoot%\System32\drivers\etc\hosts` (Windows) or `/etc/hosts` (Linux, macOS).
| Internal virtual IP address | Endpoint configuration |
| -- | -- |
If you don't want to access the API Management service with the default host nam
:::image type="content" source="media/api-management-using-with-internal-vnet/api-management-custom-domain-name.png" alt-text="Set up custom domain name":::
-2. Create records in your DNS server to access the endpoints accessible from within your VNET. Map the endpoint records to the [private virtual IP address](#routing) for your service.
+2. Create records in your DNS server to access the endpoints accessible from within your VNet. Map the endpoint records to the [private virtual IP address](#routing) for your service.
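For example, if you use the Azure DNS private zone recommended above, adding an A record for the gateway host might look like the following sketch; the zone name, record name, and IP address are placeholders based on the `contosointernalvnet` example:

```azurecli-interactive
az network private-dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name azure-api.net \
    --record-set-name contosointernalvnet \
    --ipv4-address 10.1.0.5
```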
## Routing
-The following virtual IP addresses are configured for an API Management instance in an internal virtual network. Learn more about the [IP addresses of API Management](api-management-howto-ip-addresses.md).
+The following virtual IP addresses are configured for an API Management instance in an internal virtual network.
| Virtual IP address | Description |
| -- | -- |
-| **Private virtual IP address** | A load balanced IP address from within the API Management instance's subnet range (DIP), over which you can access the API gateway, developer portal, management, and Git endpoints.<br/><br/>Register this address with the DNS servers used by the VNET. |
+| **Private virtual IP address** | A load balanced IP address from within the API Management instance's subnet range (DIP), over which you can access the API gateway, developer portal, management, and Git endpoints.<br/><br/>Register this address with the DNS servers used by the VNet. |
| **Public virtual IP address** | Used *only* for control plane traffic to the management endpoint over port 3443. Can be locked down to the [ApiManagement][ServiceTags] service tag. |

The load-balanced public and private IP addresses can be found on the **Overview** blade in the Azure portal.
+For more information and considerations, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#ip-addresses-of-api-management-service-in-vnet).
+
+The load-balanced public and private IP addresses can be found on the **Overview** blade in the Azure portal.
> [!NOTE]
> The VIP address(es) of the API Management instance will change when:
-> * The VNET is enabled or disabled.
+> * The VNet is enabled or disabled.
> * API Management is moved from **External** to **Internal** virtual network mode, or vice versa. > * [Zone redundancy](zone-redundancy.md) settings are enabled, updated, or disabled in a location for your instance (Premium SKU only). >
-> You may need to update DNS registrations, routing rules, and IP restriction lists within the VNET.
+> You may need to update DNS registrations, routing rules, and IP restriction lists within the VNet.
### VIP and DIP addresses
-Dynamic IP (DIP) addresses will be assigned to each underlying virtual machine in the service and used to access resources *within* the VNET. The API Management service's public virtual IP (VIP) address will be used to access resources *outside* the VNET. If IP restriction lists secure resources within the VNET, you must specify the entire subnet range where the API Management service is deployed to grant or restrict access from the service.
+Dynamic IP (DIP) addresses will be assigned to each underlying virtual machine in the service and used to access resources *within* the VNet. The API Management service's public virtual IP (VIP) address will be used to access resources *outside* the VNet. If IP restriction lists secure resources within the VNet, you must specify the entire subnet range where the API Management service is deployed to grant or restrict access from the service.
Learn more about the [recommended subnet size](virtual-network-concepts.md#subnet-size).

#### Example
-if you deploy 1 [capacity unit](api-management-capacity.md) of API Management in the Premium tier in an internal VNET, 3 IP addresses will be used: 1 for the private VIP and one each for the DIPs for two VMs. If you scale out to 4 units, more IPs will be consumed for additional DIPs from the subnet.
+If you deploy one [capacity unit](api-management-capacity.md) of API Management in the Premium tier in an internal VNet, three IP addresses will be used: one for the private VIP and one each for the DIPs of the two VMs. If you scale out to four units, more IPs will be consumed for additional DIPs from the subnet.
If the destination endpoint has allow-listed only a fixed set of DIPs, connection failures will result if you add new units in the future. For this reason and since the subnet is entirely in your control, we recommend allow-listing the entire subnet in the backend.
+## <a name="network-configuration-issues"> </a>Common network configuration issues
+
+This section has moved. See [Virtual network configuration reference](virtual-network-reference.md).
## Next steps

Learn more about:
-* [Network configuration when setting up Azure API Management in a VNET][Common network configuration problems]
-* [VNET FAQs](../virtual-network/virtual-networks-faq.md)
+* [Virtual network configuration reference](virtual-network-reference.md)
+* [VNet FAQs](../virtual-network/virtual-networks-faq.md)
* [Creating a record in DNS](/previous-versions/windows/it-pro/windows-2000-server/bb727018(v=technet.10)) [api-management-using-internal-vnet-menu]: ./media/api-management-using-with-internal-vnet/updated-api-management-using-with-internal-vnet.png
Learn more about:
[api-management-custom-domain-name]: ./media/api-management-using-with-internal-vnet/updated-api-management-custom-domain-name.png [Create API Management service]: get-started-create-service-instance.md
-[Common network configuration problems]: api-management-using-with-vnet.md#network-configuration-issues
+
+[Common network configuration problems]: virtual-network-reference.md
[ServiceTags]: ../virtual-network/network-security-groups-overview.md#service-tags
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
Title: Connect to a virtual network using Azure API Management
-description: Learn how to set up a connection to a virtual network in Azure API Management and access web services through it.
+description: Learn how to set up a connection to a virtual network in Azure API Management and access API backends through it.
Previously updated : 08/10/2021 Last updated : 01/03/2022

# Connect to a virtual network using Azure API Management
-Azure API Management can be deployed inside an Azure virtual network (VNET) to access backend services within the network. For VNET connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
+Azure API Management can be deployed inside an Azure virtual network (VNet) to access backend services within the network. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
-This article explains how to set up VNET connectivity for your API Management instance in the *external* mode, where the developer portal, API gateway, and other API Management endpoints are accessible from the public internet. For configurations specific to the *internal* mode, where the endpoints are accessible only within the VNET, see [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md).
+This article explains how to set up VNet connectivity for your API Management instance in the *external* mode, where the developer portal, API gateway, and other API Management endpoints are accessible from the public internet, and backend services are located in the network.
+
+For configurations specific to the *internal* mode, where the endpoints are accessible only within the VNet, see [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] [!INCLUDE [premium-dev.md](../../includes/api-management-availability-premium-dev.md)]
-## Prerequisites
-
-Some prerequisites differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-
-> [!TIP]
-> When you use the portal to create or update the network connection of an existing API Management instance, the instance is hosted on the `stv2` compute platform.
-
-### [stv2](#tab/stv2)
-
-+ **An API Management instance.** For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
-
-* **A virtual network and subnet** in the same region and subscription as your API Management instance. The subnet may contain other Azure resources.
-
-* **A network security group** attached to the subnet above. A network security group (NSG) is required to explicitly allow inbound connectivity, because the load balancer used internally by API Management is secure by default and rejects all inbound traffic. For more strict configuration refer to **Required ports** below.
--
-### [stv1](#tab/stv1)
-
-+ **An API Management instance.** For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
-
-* **A virtual network and subnet** in the same region and subscription as your API Management instance.
- The subnet must be dedicated to API Management instances. Attempting to deploy an Azure API Management instance to a Resource Manager VNET subnet that contains other resources will cause the deployment to fail.
+## Enable VNet connection
--
-## Enable VNET connection
-
-### Enable VNET connectivity using the Azure portal (`stv2` compute platform)
+### Enable VNet connectivity using the Azure portal (`stv2` compute platform)
1. Go to the [Azure portal](https://portal.azure.com) to find your API management instance. Search for and select **API Management services**.
1. Choose your API Management instance.
1. Select **Virtual network**.
1. Select the **External** access type.
- :::image type="content" source="media/api-management-using-with-vnet/api-management-menu-vnet.png" alt-text="Select VNET in Azure portal.":::
+ :::image type="content" source="media/api-management-using-with-vnet/api-management-menu-vnet.png" alt-text="Select VNet in Azure portal.":::
1. In the list of locations (regions) where your API Management service is provisioned:
   1. Choose a **Location**.
   1. Select **Virtual network**, **Subnet**, and **IP address**.
- * The VNET list is populated with Resource Manager VNETs available in your Azure subscriptions, set up in the region you are configuring.
+ * The VNet list is populated with Resource Manager VNets available in your Azure subscriptions, set up in the region you are configuring.
- :::image type="content" source="media/api-management-using-with-vnet/api-management-using-vnet-select.png" alt-text="VNET settings in the portal.":::
+ :::image type="content" source="media/api-management-using-with-vnet/api-management-using-vnet-select.png" alt-text="VNet settings in the portal.":::
-1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new VNET and subnet choices.
+1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new VNet and subnet choices.
-1. Continue configuring VNET settings for the remaining locations of your API Management instance.
+1. Continue configuring VNet settings for the remaining locations of your API Management instance.
7. In the top navigation bar, select **Save**, then select **Apply network configuration**. It can take 15 to 45 minutes to update the API Management instance.
-### Enable connectivity using a Resource Manager template
-
-Use the following templates to deploy an API Management instance and connect to a VNET. The templates differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-
-### [stv2](#tab/stv2)
+### Enable connectivity using a Resource Manager template (`stv2` compute platform)
-#### API version 2021-01-01-preview
-
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet-publicip)
+* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet-publicip) (API version 2021-01-01-preview)
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-external-vnet-publicip%2Fazuredeploy.json)
-### [stv1](#tab/stv1)
-
-#### API version 2020-12-01
-
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet)
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-external-vnet%2Fazuredeploy.json)
+### Enable connectivity using Azure PowerShell cmdlets (`stv1` platform)
-### Enable connectivity using Azure PowerShell cmdlets
+[Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a VNet.
-[Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a VNET.
--

## Connect to a web service hosted within a virtual network
-Once you've connected your API Management service to the VNET, you can access backend services within it just as you do public services. When creating or editing an API, type the local IP address or the host name (if a DNS server is configured for the VNET) of your web service into the **Web service URL** field.
+Once you've connected your API Management service to the VNet, you can access backend services within it just as you do public services. When creating or editing an API, type the local IP address or the host name (if a DNS server is configured for the VNet) of your web service into the **Web service URL** field.
-## <a name="network-configuration-issues"> </a>Common Network Configuration Issues
+## Custom DNS server setup
+In external VNet mode, Azure manages the DNS by default. You can optionally configure a custom DNS server.
-Review the following sections for more network configuration settings.
-
-These settings address common misconfiguration issues that can occur while deploying API Management service into a VNET.
-
-### Custom DNS server setup
-In external VNET mode, Azure manages the DNS by default. You can optionally configure a custom DNS server. The API Management service depends on several Azure services. When API Management is hosted in a VNET with a custom DNS server, it needs to resolve the hostnames of those Azure services.
+The API Management service depends on several Azure services. When API Management is hosted in a VNet with a custom DNS server, it needs to resolve the hostnames of those Azure services.
* For guidance on custom DNS setup, including forwarding for Azure-provided hostnames, see [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-* For reference, see the [required ports](#required-ports) and network requirements.
-
-> [!IMPORTANT]
-> If you plan to use a custom DNS server(s) for the VNET, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS Server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates).
-
-### Required ports
-
-You can control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security group][NetworkSecurityGroups] rules. If certain ports are unavailable, API Management may not operate properly and may become inaccessible.
-
-When an API Management service instance is hosted in a VNET, the ports in the following table are used. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-
->[!IMPORTANT]
-> **Bold** items in the *Purpose* column indicate port configurations required for successful deployment and operation of the API Management service. Configurations labeled "optional" are only needed to enable specific features, as noted. They are not required for the overall health of the service.
-
-#### [stv2](#tab/stv2)
-
-| Source / Destination Port(s) | Direction | Transport protocol | [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) <br> Source / Destination | Purpose (\*) | VNET type |
-||--|--||-|-|
-| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | Client communication to API Management (optional) | External |
-| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | Management endpoint for Azure portal and PowerShell (optional) | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
-| * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureKeyVault | **Access to Azure Key Vault** | External & Internal |
-| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / Event Hub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional) | External & Internal |
-| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
-| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension (optional) | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
-| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
-| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
-| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
-| * / 6390 | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | **Azure Infrastructure Load Balancer** | External & Internal |
-
-#### [stv1](#tab/stv1)
-
-| Source / Destination Port(s) | Direction | Transport protocol | [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) <br> Source / Destination | Purpose (\*) | VNET type |
-||--|--||-|-|
-| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | Client communication to API Management (optional) | External |
-| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | Management endpoint for Azure portal and PowerShell (optional) | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal |
-| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) dependency (optional) | External & Internal |
-| * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
-| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / Event Hub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal |
-| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
-| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension & Dependency on Event Grid (if events notification activated) (optional) | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
-| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
-| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
-| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
-| * / * | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | **Azure Infrastructure Load Balancer** (required for Premium SKU, optional for other SKUs) | External & Internal |
---
-### TLS functionality
- To enable TLS/SSL certificate chain building and validation, the API Management service needs outbound network connectivity to `ocsp.msocsp.com`, `mscrl.microsoft.com`, and `crl.microsoft.com`. This dependency is not required if any certificate you upload to API Management contains the full chain to the CA root.
-
-### DNS access
- Outbound access on `port 53` is required for communication with DNS servers. If a custom DNS server exists on the other end of a VPN gateway, the DNS server must be reachable from the subnet hosting API Management.
-
-### Metrics and health monitoring
-
-Outbound network connectivity to Azure Monitoring endpoints, which resolve under the following domains, are represented under the AzureMonitor service tag for use with Network Security Groups.
-
-| Azure Environment | Endpoints |
- |-||
-| Azure Public | <ul><li>gcs.prod.monitoring.core.windows.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.com</li></ul> |
-| Azure Government | <ul><li>fairfax.warmpath.usgovcloudapi.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>prod5.prod.microsoftmetrics.com</li><li>prod5-black.prod.microsoftmetrics.com</li><li>prod5-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.us</li></ul> |
-| Azure China 21Vianet | <ul><li>mooncake.warmpath.chinacloudapi.cn</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>prod5.prod.microsoftmetrics.com</li><li>prod5-black.prod.microsoftmetrics.com</li><li>prod5-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.cn</li></ul> |
-
-### Regional service tags
-
-NSG rules allowing outbound connectivity to Storage, SQL, and Event Hubs service tags may use the regional versions of those tags corresponding to the region containing the API Management instance (for example, Storage.WestUS for an API Management instance in the West US region). In multi-region deployments, the NSG in each region should allow traffic to the service tags for that region and the primary region.
+* Outbound network access on port `53` is required for communication with DNS servers. For more settings, see [Virtual network configuration reference](virtual-network-reference.md).
> [!IMPORTANT]
-> Enable publishing the [developer portal](api-management-howto-developer-portal.md) for an API Management instance in a VNET by allowing outbound connectivity to blob storage in the West US region. For example, use the **Storage.WestUS** service tag in an NSG rule. Currently, connectivity to blob storage in the West US region is required to publish the developer portal for any API Management instance.
-
-### SMTP relay
-
-Allow outbound network connectivity for the SMTP Relay, which resolves under the host `smtpi-co1.msn.com`, `smtpi-ch1.msn.com`, `smtpi-db3.msn.com`, `smtpi-sin.msn.com`, and `ies.global.microsoft.com`
-
-> [!NOTE]
-> Only the SMTP relay provided in API Management may be used to send email from your instance.
-
-### Developer portal CAPTCHA
-Allow outbound network connectivity for the developer portal's CAPTCHA, which resolves under the hosts `client.hip.live.com` and `partner.hip.live.com`.
-
-### Azure portal diagnostics
- When using the API Management extension from inside a VNET, outbound access to `dc.services.visualstudio.com` on `port 443` is required to enable the flow of diagnostic logs from Azure portal. This access helps in troubleshooting issues you might face when using extension.
-
-### Azure load balancer
- You're not required to allow inbound requests from service tag `AZURE_LOAD_BALANCER` for the `Developer` SKU, since only one compute unit is deployed behind it. But inbound from `AZURE_LOAD_BALANCER` becomes **critical** when scaling to a higher SKU, like `Premium`, as failure of health probe from load balancer then blocks all Inbound access to control plane and data plane.
+> If you plan to use a custom DNS server(s) for the VNet, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS Server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates).
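As a minimal sketch (the resource names and DNS server IP are hypothetical), you might set the VNet's custom DNS server with Azure PowerShell before deploying API Management into it:

```powershell
# Hypothetical names; configure the custom DNS server on the VNet before API Management is deployed into it.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "apim-rg" -Name "apim-vnet"

if (-not $vnet.DhcpOptions) {
    $vnet.DhcpOptions = New-Object Microsoft.Azure.Commands.Network.Models.PSDhcpOptions
}
$vnet.DhcpOptions.DnsServers = @("10.0.0.4")   # IP address of your custom DNS server

$vnet | Set-AzVirtualNetwork
```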
-### Application Insights
- If you've enabled [Azure Application Insights](api-management-howto-app-insights.md) monitoring on API Management, allow outbound connectivity to the [telemetry endpoint](../azure-monitor/app/ip-addresses.md#outgoing-ports) from the VNET.
-
-### KMS endpoint
-
-When adding virtual machines running Windows to the VNET, allow outbound connectivity on port 1688 to the [KMS endpoint](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation#solution) in your cloud. This configuration routes Windows VM traffic to the Azure Key Management Services (KMS) server to complete Windows activation.
-
-### Force tunneling traffic to on-premises firewall Using ExpressRoute or Network Virtual Appliance
- Commonly, you configure and define your own default route (0.0.0.0/0), forcing all traffic from the API Management-delegated subnet to flow through an on-premises firewall or to a network virtual appliance. This traffic flow breaks connectivity with Azure API Management, since outbound traffic is either blocked on-premises, or NAT'd to an unrecognizable set of addresses no longer working with various Azure endpoints. You can solve this issue via a couple of methods:
+## Routing
- * Enable [service endpoints][ServiceEndpoints] on the subnet in which the API Management service is deployed for:
- * Azure SQL
- * Azure Storage
- * Azure Event Hub
- * Azure Key Vault (v2 platform)
-
- By enabling endpoints directly from API Management subnet to these services, you can use the Microsoft Azure backbone network, providing optimal routing for service traffic. If you use service endpoints with a force tunneled API Management, the above Azure services traffic isn't force tunneled. The other API Management service dependency traffic is force tunneled and can't be lost. If lost, the API Management service would not function properly.
++ A load-balanced public IP address (VIP) is reserved to provide access to all service endpoints and resources outside the VNet.
+ + The public VIP can be found on the **Overview/Essentials** blade in the Azure portal.
++ An IP address from a subnet IP range (DIP) is used to access resources within the VNet.
- * All the control plane traffic from the internet to the management endpoint of your API Management service is routed through a specific set of inbound IPs, hosted by API Management. When the traffic is force tunneled, the responses will not symmetrically map back to these inbound source IPs. To overcome the limitation, set the destination of the following user-defined routes ([UDRs][UDRs]) to the "Internet", to steer traffic back to Azure. Find the set of inbound IPs for control plane traffic documented in [Control Plane IP Addresses](#control-plane-ip-addresses).
+For more information and considerations, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#ip-addresses-of-api-management-service-in-vnet).
- * For other force tunneled API Management service dependencies, resolve the hostname and reach out to the endpoint. These include:
- - Metrics and Health Monitoring
- - Azure portal Diagnostics
- - SMTP Relay
- - Developer portal CAPTCHA
- - Azure KMS server
+## <a name="network-configuration-issues"> </a>Common network configuration issues
-## Routing
+This section has moved. See [Virtual network configuration reference](virtual-network-reference.md).
-+ A load-balanced public IP address (VIP) is reserved to provide access to all service endpoints and resources outside the VNET.
- + Load balanced public IP addresses can be found on the **Overview/Essentials** blade in the Azure portal.
-+ An IP address from a subnet IP range (DIP) is used to access resources within the VNET.
-
-> [!NOTE]
-> The VIP address(es) of the API Management instance will change when:
-> * The VNET is enabled or disabled.
-> * API Management is moved from **External** to **Internal** virtual network mode, or vice versa.
-> * [Zone redundancy](zone-redundancy.md) settings are enabled, updated, or disabled in a location for your instance (Premium SKU only).
-
-## Control plane IP addresses
-
-The following IP addresses are divided by **Azure Environment**. When allowing inbound requests, IP addresses marked with **Global** must be permitted, along with the **Region**-specific IP address. In some cases, two IP addresses are listed. Permit both IP addresses.
-
-| **Azure Environment**| **Region**| **IP address**|
-|--|-||
-| Azure Public| South Central US (Global)| 104.214.19.224|
-| Azure Public| North Central US (Global)| 52.162.110.80|
-| Azure Public| Australia Central| 20.37.52.67|
-| Azure Public| Australia Central 2| 20.39.99.81|
-| Azure Public| Australia East| 20.40.125.155|
-| Azure Public| Australia Southeast| 20.40.160.107|
-| Azure Public| Brazil South| 191.233.24.179, 191.238.73.14|
-| Azure Public| Brazil Southeast| 191.232.18.181|
-| Azure Public| Canada Central| 52.139.20.34, 20.48.201.76|
-| Azure Public| Canada East| 52.139.80.117|
-| Azure Public| Central India| 13.71.49.1, 20.192.45.112|
-| Azure Public| Central US| 13.86.102.66|
-| Azure Public| Central US EUAP| 52.253.159.160|
-| Azure Public| East Asia| 52.139.152.27|
-| Azure Public| East US| 52.224.186.99|
-| Azure Public| East US 2| 20.44.72.3|
-| Azure Public| East US 2 EUAP| 52.253.229.253|
-| Azure Public| France Central| 40.66.60.111|
-| Azure Public| France South| 20.39.80.2|
-| Azure Public| Germany North| 51.116.0.0|
-| Azure Public| Germany West Central| 51.116.96.0, 20.52.94.112|
-| Azure Public| Japan East| 52.140.238.179|
-| Azure Public| Japan West| 40.81.185.8|
-| Azure Public| India Central| 20.192.234.160|
-| Azure Public| India West| 20.193.202.160|
-| Azure Public| Korea Central| 40.82.157.167, 20.194.74.240|
-| Azure Public| Korea South| 40.80.232.185|
-| Azure Public| North Central US| 40.81.47.216|
-| Azure Public| North Europe| 52.142.95.35|
-| Azure Public| Norway East| 51.120.2.185|
-| Azure Public| Norway West| 51.120.130.134|
-| Azure Public| South Africa North| 102.133.130.197, 102.37.166.220|
-| Azure Public| South Africa West| 102.133.0.79|
-| Azure Public| South Central US| 20.188.77.119, 20.97.32.190|
-| Azure Public| South India| 20.44.33.246|
-| Azure Public| Southeast Asia| 40.90.185.46|
-| Azure Public| Switzerland North| 51.107.0.91|
-| Azure Public| Switzerland West| 51.107.96.8|
-| Azure Public| UAE Central| 20.37.81.41|
-| Azure Public| UAE North| 20.46.144.85|
-| Azure Public| UK South| 51.145.56.125|
-| Azure Public| UK West| 51.137.136.0|
-| Azure Public| West Central US| 52.253.135.58|
-| Azure Public| West Europe| 51.145.179.78|
-| Azure Public| West India| 40.81.89.24|
-| Azure Public| West US| 13.64.39.16|
-| Azure Public| West US 2| 51.143.127.203|
-| Azure Public| West US 3| 20.150.167.160|
-| Azure China 21Vianet| China North (Global)| 139.217.51.16|
-| Azure China 21Vianet| China East (Global)| 139.217.171.176|
-| Azure China 21Vianet| China North| 40.125.137.220|
-| Azure China 21Vianet| China East| 40.126.120.30|
-| Azure China 21Vianet| China North 2| 40.73.41.178|
-| Azure China 21Vianet| China East 2| 40.73.104.4|
-| Azure Government| USGov Virginia (Global)| 52.127.42.160|
-| Azure Government| USGov Texas (Global)| 52.127.34.192|
-| Azure Government| USGov Virginia| 52.227.222.92|
-| Azure Government| USGov Iowa| 13.73.72.21|
-| Azure Government| USGov Arizona| 52.244.32.39|
-| Azure Government| USGov Texas| 52.243.154.118|
-| Azure Government| USDoD Central| 52.182.32.132|
-| Azure Government| USDoD East| 52.181.32.192|
-
-## Troubleshooting
-* **Unsuccessful initial deployment of API Management service into a subnet**
- * Deploy a virtual machine into the same subnet.
- * Connect to the virtual machine and validate connectivity to one of each of the following resources in your Azure subscription:
- * Azure Storage blob
- * Azure SQL Database
- * Azure Storage Table
- * Azure Key Vault (for an API Management instance hosted on the [`stv2` platform](compute-infrastructure.md))
-
- > [!IMPORTANT]
- > After validating the connectivity, remove all the resources in the subnet before deploying API Management into the subnet (required when API Management is hosted on the `stv1` platform).
-
-* **Verify network connectivity status**
- * After deploying API Management into the subnet, use the portal to check the connectivity of your instance to dependencies, such as Azure Storage.
- * In the portal, in the left-hand menu, under **Deployment and infrastructure**, select **Network connectivity status**.
-
- :::image type="content" source="media/api-management-using-with-vnet/verify-network-connectivity-status.png" alt-text="Verify network connectivity status in the portal":::
-
- | Filter | Description |
- | -- | -- |
- | **Required** | Select to review the required Azure services connectivity for API Management. Failure indicates that the instance is unable to perform core operations to manage APIs. |
- | **Optional** | Select to review the optional services connectivity. Failure indicates only that the specific functionality will not work (for example, SMTP). Failure may lead to degradation in using and monitoring the API Management instance and providing the committed SLA. |
-
- To address connectivity issues, review [network configuration settings](#network-configuration-issues) and fix required network settings.
-
-* **Incremental updates**
- When making changes to your network, refer to [NetworkStatus API](/rest/api/apimanagement/current-ga/network-status) to verify that the API Management service has not lost access to critical resources. The connectivity status should be updated every 15 minutes.
-
-* **Resource navigation links**
- An APIM instance hosted on the [`stv1` compute platform](compute-infrastructure.md), when deployed into a Resource Manager VNET subnet, reserves the subnet by creating a resource navigation link. If the subnet already contains a resource from a different provider, deployment will **fail**. Similarly, when you delete an API Management service, or move it to a different subnet, the resource navigation link will be removed.
## Next steps Learn more about:
+* [Virtual network configuration reference](virtual-network-reference.md)
* [Connecting a virtual network to backend using VPN Gateway](../vpn-gateway/design.md#s2smulti) * [Connecting a virtual network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md) * [Debug your APIs using request tracing](api-management-howto-api-inspector.md)
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
# Deploy to Azure Kubernetes Service
-This article provides the steps for deploying self-hosted gateway component of Azure API Management to [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/). For deploying self-hosted gateway to a Kubernetes cluster, see the [how-to article](how-to-deploy-self-hosted-gateway-kubernetes.md).
+This article provides the steps for deploying the self-hosted gateway component of Azure API Management to [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/). To deploy the self-hosted gateway to a Kubernetes cluster, see the how-to articles for deployment by using a [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) or [with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md).
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying self-hosted gateway component of Azure API Management to a Docker environment. > [!NOTE]
-> Hosting self-hosted gateway in Docker is best suited for evaluation and development use cases. Kubernetes is recommended for production use. See [this](how-to-deploy-self-hosted-gateway-kubernetes.md) document to learn how to deploy self-hosted gateway to Kubernetes.
+> Hosting the self-hosted gateway in Docker is best suited for evaluation and development use cases. Kubernetes is recommended for production use. Learn how to deploy the self-hosted gateway to Kubernetes [with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) or by using a [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md).
## Prerequisites
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
+
+ Title: Deploy a self-hosted gateway to Kubernetes with Helm
+description: Learn how to deploy self-hosted gateway component of Azure API Management to Kubernetes with Helm
++++ Last updated : 12/21/2021+++
+# Deploy to Kubernetes with Helm
+
+[Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. It allows you to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources.
+
+This article provides the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster by using Helm.
+
+> [!NOTE]
+> You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md).
+
+## Prerequisites
+
+- Create a Kubernetes cluster, or have access to an existing one.
+ > [!TIP]
+ > [Single-node clusters](https://kubernetes.io/docs/setup/#learning-environment) work well for development and evaluation purposes. Use [Kubernetes Certified](https://kubernetes.io/partners/#conformance) multi-node clusters on-premises or in the cloud for production workloads.
+* [Create an Azure API Management instance](get-started-create-service-instance.md).
+* [Provision a gateway resource in your API Management instance](api-management-howto-provision-self-hosted-gateway.md).
+* [Install Helm v3][helm-install].
+
+## Adding the Helm repository
+
+1. Add Azure API Management as a new Helm repository.
+
+ ```console
+ helm repo add azure-apim-gateway https://azure.github.io/api-management-self-hosted-gateway/helm-charts/
+ ```
+
+2. Update repo to fetch the latest Helm charts.
+
+ ```console
+ helm repo update
+ ```
+
+3. Verify your Helm configuration by listing all available charts.
+
+ ```console
+ $ helm search repo azure-apim-gateway
+ NAME CHART VERSION APP VERSION DESCRIPTION
+ azure-apim-gateway/azure-api-management-gateway 0.3.0 1.1.2 A Helm chart to deploy an Azure API Management ...
+ ```
+
+## Deploy the self-hosted gateway to Kubernetes
+
+1. In the Azure portal, navigate to your API Management instance, and then select **Gateways** from under **Deployment and infrastructure**.
+2. Select the self-hosted gateway resource you intend to deploy.
+3. Select **Deployment**.
+4. A new token is autogenerated for you in the **Token** text box, based on the default **Expiry** and **Secret Key** values. Adjust either or both if desired, and then select **Generate** to create a new token.
+5. Take note of your **Token** and **Configuration URL**.
+6. Install the self-hosted gateway by using the Helm chart:
+
+ ```console
+ helm install azure-api-management-gateway \
+ --set gateway.endpoint='<your configuration url>' \
+ --set gateway.authKey='<your token>' \
+ azure-apim-gateway/azure-api-management-gateway
+ ```
+
+7. Execute the command. The command instructs your Kubernetes cluster to:
+ * Download the image of the self-hosted gateway from the Microsoft Container Registry and run it as a container.
+ * Configure the container to expose HTTP (8080) and HTTPS (8081) ports.
+
+ > [!IMPORTANT]
+ > By default, the gateway is using a ClusterIP service and is only exposed inside the cluster.
+ > You can change this by specifying the type of Kubernetes service during installation.
+ >
+ > For example, you can expose it through a load balancer by adding `--set service.type=LoadBalancer`
+
+8. Run the following command to check the gateway pod is running. Your pod name will be different.
+
+ ```console
+ kubectl get pods
+ NAME READY STATUS RESTARTS AGE
+ azure-api-management-gateway-59f5fb94c-s9stz 1/1 Running 0 1m
+ ```
+
+9. Run the following command to check the gateway service is running. Your service name and IP addresses will be different.
+
+ ```console
+ kubectl get services
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ azure-api-management-gateway ClusterIP 10.0.229.55 <none> 8080/TCP,8081/TCP 1m
+ ```
+
+10. Return to the Azure portal and confirm that the gateway node you deployed is reporting a healthy status.
+
+> [!TIP]
+> Use the `kubectl logs <gateway-pod-name>` command to view a snapshot of the self-hosted gateway log.
+
+## Next steps
+
+* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md).
+* Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* Learn [how to configure and persist logs in the cloud](how-to-configure-cloud-metrics-logs.md).
+* Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md).
+
+[helm]: https://helm.sh/
+[helm-install]: https://helm.sh/docs/intro/install/
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
Title: Deploy a self-hosted gateway to Kubernetes
-description: Learn how to deploy a self-hosted gateway component of Azure API Management to Kubernetes
+ Title: Deploy a self-hosted gateway to Kubernetes with YAML
+description: Learn how to deploy a self-hosted gateway component of Azure API Management to Kubernetes with YAML
Last updated 05/25/2021
-# Deploy a self-hosted gateway to Kubernetes
+# Deploy a self-hosted gateway to Kubernetes with YAML
This article describes the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster.
This article describes the steps for deploying the self-hosted gateway component
## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).-- Create a Kubernetes cluster.
+- Create a Kubernetes cluster, or have access to an existing one.
> [!TIP] > [Single-node clusters](https://kubernetes.io/docs/setup/#learning-environment) work well for development and evaluation purposes. Use [Kubernetes Certified](https://kubernetes.io/partners/#conformance) multi-node clusters on-premises or in the cloud for production workloads. - [Provision a self-hosted gateway resource in your API Management instance](api-management-howto-provision-self-hosted-gateway.md).
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/self-hosted-gateway-overview.md
When connectivity is restored, each self-hosted gateway affected by the outage w
- [Read a whitepaper for additional background on this topic](https://aka.ms/hybrid-and-multi-cloud-api-management) - [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)-- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- [Deploy self-hosted gateway to Kubernetes with YAML](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- [Deploy self-hosted gateway to Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md)
- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/virtual-network-concepts.md
# Use a virtual network with Azure API Management
-With Azure Virtual Networks (VNETs), you can place your Azure resources in a non-internet-routable network to which you control access. You can then connect VNETs to your on-premises networks using various VPN technologies. To learn more about Azure VNETs, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
+With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. You can then connect VNets to your on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
-This article explains VNET connectivity options, requirements, and considerations for your API Management instance. You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the deployment. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups][NetworkSecurityGroups].
+This article explains VNet connectivity options, requirements, and considerations for your API Management instance. You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the configuration. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups][NetworkSecurityGroups].
For detailed deployment steps and network configuration, see:
For detailed deployment steps and network configuration, see:
## Access options
-When created, an API Management instance must be accessible from the internet. Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the VNET (internal mode).
+When created, an API Management instance must be accessible from the internet. Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the VNet (internal mode).
-* **External** - The API Management endpoints are accessible from the public internet via an external load balancer. The gateway can access resources within the VNET.
+* **External** - The API Management endpoints are accessible from the public internet via an external load balancer. The gateway can access resources within the VNet.
- :::image type="content" source="media/virtual-network-concepts/api-management-vnet-external.png" alt-text="Connect to external VNET":::
+ :::image type="content" source="media/virtual-network-concepts/api-management-vnet-external.png" alt-text="Connect to external VNet":::
Use API Management in external mode to access backend services deployed in the virtual network.
-* **Internal** - The API Management endpoints are accessible only from within the VNET via an internal load balancer. The gateway can access resources within the VNET.
+* **Internal** - The API Management endpoints are accessible only from within the VNet via an internal load balancer. The gateway can access resources within the VNet.
- :::image type="content" source="media/virtual-network-concepts/api-management-vnet-internal.png" alt-text="Connect to internal VNET":::
+ :::image type="content" source="media/virtual-network-concepts/api-management-vnet-internal.png" alt-text="Connect to internal VNet":::
Use API Management in internal mode to:
The following are virtual network resource requirements for API Management. Some
* The subnet used to connect to the API Management instance may contain other Azure resource types. * A [network security group](../virtual-network/network-security-groups-overview.md) attached to the subnet above. A network security group (NSG) is required to explicitly allow inbound connectivity, because the load balancer used internally by API Management is secure by default and rejects all inbound traffic. * The API Management service, virtual network and subnet, and public IP address resource must be in the same region and subscription.
-* For multi-region API Management deployments, you configure virtual network resources separately for each location.
+* For multi-region API Management deployments, configure virtual network resources separately for each location.
### [stv1](#tab/stv1)
The following are virtual network resource requirements for API Management. Some
## Subnet size
-The minimum size of the subnet in which API Management can be deployed is /29, which gives three usable IP addresses. Each extra scale unit of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
+The minimum size of the subnet in which API Management can be deployed is /29, which gives three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
* Azure reserves some IP addresses within each subnet that can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
-* In addition to the IP addresses used by the Azure VNET infrastructure, each API Management instance in the subnet uses:
+* In addition to the IP addresses used by the Azure VNet infrastructure, each API Management instance in the subnet uses:
* Two IP addresses per unit of Premium SKU, or * One IP address for the Developer SKU.
-* Each instance reserves an extra IP address for the external load balancer. When deploying into an [internal VNET](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
+* Each instance reserves an extra IP address for the external load balancer. When deploying into an [internal VNet](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
## Routing
-See the Routing guidance when deploying your API Management instance into an [external VNET](./api-management-using-with-vnet.md#routing) or [internal VNET](./api-management-using-with-internal-vnet.md#routing).
+See the Routing guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing).
Learn more about the [IP addresses of API Management](api-management-howto-ip-addresses.md). ## DNS
-In external mode, the VNET enables [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) for your API Management endpoints and other Azure resources. It does not provide name resolution for on-premises resources.
+* In external mode, the VNet enables [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) by default for your API Management endpoints and other Azure resources. It does not provide name resolution for on-premises resources. Optionally, configure your own DNS solution.
+
+* In internal mode, you must provide your own DNS solution to ensure name resolution for API Management endpoints and other required Azure resources. We recommend configuring an Azure [private DNS zone](../dns/private-dns-overview.md).
-In internal mode, you must provide your own DNS solution to ensure name resolution for API Management endpoints and other required Azure resources. We recommend configuring an Azure [private DNS zone](../dns/private-dns-overview.md).
+For more information, see the DNS guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing).
For more information, see: * [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). * [Create an Azure private DNS zone](../dns/private-dns-getstarted-portal.md) > [!IMPORTANT]
-> If you plan to use a custom DNS solution for the VNET, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates).
+> If you plan to use a custom DNS solution for the VNet, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates), or by selecting **Apply network configuration** in the service instance's network configuration window in the Azure portal.
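As an example of the private DNS zone recommendation above, the following Azure PowerShell sketch creates a zone and links it to the VNet. The zone and resource names are assumptions (the zone name mirrors the default API Management domain), and you still need to add A records for your API Management endpoints.

```powershell
# Hypothetical names; create a private DNS zone and link it to the VNet hosting API Management.
New-AzPrivateDnsZone -ResourceGroupName "apim-rg" -Name "azure-api.net"

$vnet = Get-AzVirtualNetwork -ResourceGroupName "apim-rg" -Name "apim-vnet"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "apim-rg" -ZoneName "azure-api.net" `
    -Name "apim-dns-link" -VirtualNetworkId $vnet.Id
```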
## Limitations
Some limitations differ depending on the version (`stv2` or `stv1`) of the [comp
### [stv2](#tab/stv2) * A subnet containing API Management instances can't be moved across subscriptions.
-* For multi-region API Management deployments configured in internal VNET mode, users own the routing and are responsible for managing the load balancing across multiple regions.
+* For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions.
* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address. ### [stv1](#tab/stv1) * A subnet containing API Management instances can't be moved across subscriptions.
-* For multi-region API Management deployments configured in internal VNET mode, users own the routing and are responsible for managing the load balancing across multiple regions.
+* For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions.
* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
-* Due to platform limitations, connectivity between a resource in a globally peered VNET in another region and an API Management service in internal mode will not work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
+* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode will not work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/virtual-network-reference.md
+
+ Title: VNet configuration settings | Azure API Management
+description: Reference for network configuration settings when deploying Azure API Management to a virtual network
+++++ Last updated : 12/16/2021+++
+# Virtual network configuration reference: API Management
+
+This reference provides detailed network configuration settings for an API Management instance deployed in an Azure virtual network in the [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) mode.
+
+For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
+
+## Required ports
+
+Control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security group][NetworkSecurityGroups] rules. If certain ports are unavailable, API Management may not operate properly and may become inaccessible.
+
+When an API Management service instance is hosted in a VNet, the ports in the following table are used. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
+
+>[!IMPORTANT]
+> * **Bold** items in the *Purpose* column indicate port configurations required for successful deployment and operation of the API Management service. Configurations labeled "optional" enable specific features, as noted. They are not required for the overall health of the service.
+>
+> * We recommend using [service tags](../virtual-network/service-tags-overview.md) instead of IP addresses in NSG rules to specify network sources and destinations. Service tags prevent downtime when infrastructure improvements necessitate IP address changes.
++
+### [stv2](#tab/stv2)
+
+| Source / Destination Port(s) | Direction | Transport protocol | Service tags <br> Source / Destination | Purpose | VNet type |
+||--|--||-|-|
+| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | **Client communication to API Management** | External only |
+| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | **Management endpoint for Azure portal and PowerShell** | External & Internal |
+| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal |
+| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
+| * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
+| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureKeyVault | **Access to Azure Key Vault** | External & Internal |
+| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / Event Hub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional) | External & Internal |
+| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
+| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension (optional) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
+| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
+| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
+| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
+| * / 6390 | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | **Azure Infrastructure Load Balancer** | External & Internal |
+
+### [stv1](#tab/stv1)
+
+| Source / Destination Port(s) | Direction | Transport protocol | [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) <br> Source / Destination | Purpose | VNet type |
+||--|--||-|-|
+| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | **Client communication to API Management** | External only |
+| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | **Management endpoint for Azure portal and PowerShell** | External & Internal |
+| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal |
+| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) dependency (optional) | External & Internal |
+| * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
+| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / Event Hub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal |
+| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
+| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension & Dependency on Event Grid (if events notification activated) (optional) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
+| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
+| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
+| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
+| * / * | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | **Azure Infrastructure Load Balancer** (required for Premium SKU, optional for other SKUs) | External & Internal |
+++
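For example, here's a minimal Azure PowerShell sketch of adding one of the required inbound rules from the tables above by using a service tag; the NSG and resource group names are hypothetical.

```powershell
# Hypothetical names; allow the API Management control plane to reach the management endpoint (port 3443).
Get-AzNetworkSecurityGroup -ResourceGroupName "apim-rg" -Name "apim-subnet-nsg" |
    Add-AzNetworkSecurityRuleConfig -Name "AllowApimManagementEndpoint" `
        -Priority 100 -Direction Inbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix ApiManagement -SourcePortRange * `
        -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 3443 |
    Set-AzNetworkSecurityGroup
```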
+## Regional service tags
+
+NSG rules allowing outbound connectivity to Storage, SQL, and Event Hubs service tags may use the regional versions of those tags corresponding to the region containing the API Management instance (for example, **Storage.WestUS** for an API Management instance in the West US region). In multi-region deployments, the NSG in each region should allow traffic to the service tags for that region and the primary region.
+
+## TLS functionality
+ To enable TLS/SSL certificate chain building and validation, the API Management service needs outbound network connectivity to `ocsp.msocsp.com`, `mscrl.microsoft.com`, and `crl.microsoft.com`. This dependency is not required if any certificate you upload to API Management contains the full chain to the CA root.
+
+## DNS access
+ Outbound access on port `53` is required for communication with DNS servers. If a custom DNS server exists on the other end of a VPN gateway, the DNS server must be reachable from the subnet hosting API Management.
+
+## Metrics and health monitoring
+
+Outbound network connectivity to Azure Monitoring endpoints, which resolve under the following domains, is represented under the **AzureMonitor** service tag for use with Network Security Groups.
+
+| Azure Environment | Endpoints |
+ |-||
+| Azure Public | <ul><li>gcs.prod.monitoring.core.windows.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.com</li></ul> |
+| Azure Government | <ul><li>fairfax.warmpath.usgovcloudapi.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>prod5.prod.microsoftmetrics.com</li><li>prod5-black.prod.microsoftmetrics.com</li><li>prod5-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.us</li></ul> |
+| Azure China 21Vianet | <ul><li>mooncake.warmpath.chinacloudapi.cn</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>prod5.prod.microsoftmetrics.com</li><li>prod5-black.prod.microsoftmetrics.com</li><li>prod5-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.cn</li></ul> |
+
+## SMTP relay
+
+Allow outbound network connectivity for the SMTP relay, which resolves under the hosts `smtpi-co1.msn.com`, `smtpi-ch1.msn.com`, `smtpi-db3.msn.com`, `smtpi-sin.msn.com`, and `ies.global.microsoft.com`.
+
+> [!NOTE]
+> Only the SMTP relay provided in API Management may be used to send email from your instance.
+
+## Developer portal CAPTCHA
+Allow outbound network connectivity for the developer portal's CAPTCHA, which resolves under the hosts `client.hip.live.com` and `partner.hip.live.com`.
+
+## Publishing the developer portal
+
+Enable publishing the [developer portal](api-management-howto-developer-portal.md) for an API Management instance in a VNet by allowing outbound connectivity to blob storage in the West US region. For example, use the **Storage.WestUS** service tag in an NSG rule. Currently, connectivity to blob storage in the West US region is required to publish the developer portal for any API Management instance.
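As a minimal sketch (the NSG and resource group names are hypothetical), such an outbound rule might look like the following in Azure PowerShell:

```powershell
# Hypothetical names; allow outbound HTTPS to blob storage in the West US region for developer portal publishing.
Get-AzNetworkSecurityGroup -ResourceGroupName "apim-rg" -Name "apim-subnet-nsg" |
    Add-AzNetworkSecurityRuleConfig -Name "AllowStorageWestUSOutbound" `
        -Priority 110 -Direction Outbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
        -DestinationAddressPrefix Storage.WestUS -DestinationPortRange 443 |
    Set-AzNetworkSecurityGroup
```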
+
+## Azure portal diagnostics
+ When using the API Management diagnostics extension from inside a VNet, outbound access to `dc.services.visualstudio.com` on port `443` is required to enable the flow of diagnostic logs from Azure portal. This access helps in troubleshooting issues you might face when using the extension.
+
+## Azure load balancer
+ You're not required to allow inbound requests from service tag `AZURE_LOAD_BALANCER` for the Developer SKU, since only one compute unit is deployed behind it. However, inbound connectivity from `AZURE_LOAD_BALANCER` becomes **critical** when scaling to a higher SKU, such as Premium, because failure of the health probe from load balancer then blocks all inbound access to the control plane and data plane.
+
+## Application Insights
+ If you enabled [Azure Application Insights](api-management-howto-app-insights.md) monitoring on API Management, allow outbound connectivity to the [telemetry endpoint](../azure-monitor/app/ip-addresses.md#outgoing-ports) from the VNet.
+
+## KMS endpoint
+
+When adding virtual machines running Windows to the VNet, allow outbound connectivity on port `1688` to the [KMS endpoint](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation#solution) in your cloud. This configuration routes Windows VM traffic to the Azure Key Management Services (KMS) server to complete Windows activation.
+
+## Force tunneling traffic to an on-premises firewall using ExpressRoute or a network virtual appliance
+ Commonly, you configure and define your own default route (0.0.0.0/0), forcing all traffic from the API Management subnet to flow through an on-premises firewall or a network virtual appliance. This traffic flow breaks connectivity with Azure API Management, because outbound traffic is either blocked on-premises or NAT'd to an unrecognized set of addresses that no longer work with various Azure endpoints. You can solve this issue via one of the following methods (a sketch of both follows the list):
+
+ * Enable [service endpoints][ServiceEndpoints] on the subnet in which the API Management service is deployed for:
+ * Azure SQL
+ * Azure Storage
+ * Azure Event Hub
+ * Azure Key Vault (required when API Management is deployed on the v2 platform)
+
+ By enabling endpoints directly from the API Management subnet to these services, you can use the Microsoft Azure backbone network, providing optimal routing for service traffic. If you use service endpoints with a force tunneled API Management, the above Azure services traffic isn't force tunneled. The remaining API Management service dependency traffic is still force tunneled and must not be lost; if it is, the API Management service won't function properly.
+
+ * All the control plane traffic from the internet to the management endpoint of your API Management service is routed through a specific set of inbound IPs, hosted by API Management. When the traffic is force tunneled, the responses will not symmetrically map back to these inbound source IPs. To overcome the limitation, set the destination of the following user-defined routes ([UDRs][UDRs]) to the "Internet", to steer traffic back to Azure. Find the set of inbound IPs for control plane traffic documented in [Control plane IP addresses](#control-plane-ip-addresses).
+
+  * For other force tunneled API Management service dependencies, make sure the hostname resolves and the endpoint is reachable. These include:
+ - Metrics and Health Monitoring
+ - Azure portal diagnostics
+ - SMTP relay
+ - Developer portal CAPTCHA
+ - Azure KMS server
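As an illustration of the first two mitigations, the following Azure CLI sketch enables service endpoints on the API Management subnet and adds a UDR that returns control plane responses via the **Internet** next hop. Resource names are placeholders, and `<control-plane-ip>` stands for a region-specific address from the table in [Control plane IP addresses](#control-plane-ip-addresses).

```azurecli
# Enable service endpoints on the API Management subnet.
az network vnet subnet update \
    --resource-group my-apim-rg \
    --vnet-name my-apim-vnet \
    --name apim-subnet \
    --service-endpoints Microsoft.Sql Microsoft.Storage Microsoft.EventHub Microsoft.KeyVault

# Steer control plane responses back to Azure instead of the on-premises default route.
az network route-table route create \
    --resource-group my-apim-rg \
    --route-table-name my-apim-routes \
    --name apim-control-plane \
    --address-prefix <control-plane-ip>/32 \
    --next-hop-type Internet
```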
+
+## Control plane IP addresses
+
+The following IP addresses are grouped by **Azure Environment**. When allowing inbound requests, IP addresses marked with **Global** must be permitted, along with the **Region**-specific IP address. In some cases, two IP addresses are listed. Permit both IP addresses.
+
+> [!IMPORTANT]
+> Control plane IP addresses should be configured for network access rules only when needed in certain networking scenarios. We recommend using the **ApiManagement** [service tag](../virtual-network/service-tags-overview.md) instead of control plane IP addresses to prevent downtime when infrastructure improvements necessitate IP address changes.
+
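If you do need to open inbound access, the following NSG rule sketch uses the **ApiManagement** service tag rather than individual control plane IPs. It assumes the management endpoint listens on port `3443`; the resource names and priority are placeholders.

```azurecli
az network nsg rule create \
    --resource-group my-apim-rg \
    --nsg-name my-apim-nsg \
    --name AllowApiManagementInbound \
    --priority 230 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes ApiManagement \
    --destination-address-prefixes VirtualNetwork \
    --destination-port-ranges 3443
```
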
+| **Azure Environment**| **Region**| **IP address**|
+|--|--|--|
+| Azure Public| South Central US (Global)| 104.214.19.224|
+| Azure Public| North Central US (Global)| 52.162.110.80|
+| Azure Public| Australia Central| 20.37.52.67|
+| Azure Public| Australia Central 2| 20.39.99.81|
+| Azure Public| Australia East| 20.40.125.155|
+| Azure Public| Australia Southeast| 20.40.160.107|
+| Azure Public| Brazil South| 191.233.24.179, 191.238.73.14|
+| Azure Public| Brazil Southeast| 191.232.18.181|
+| Azure Public| Canada Central| 52.139.20.34, 20.48.201.76|
+| Azure Public| Canada East| 52.139.80.117|
+| Azure Public| Central India| 13.71.49.1, 20.192.45.112|
+| Azure Public| Central US| 13.86.102.66|
+| Azure Public| Central US EUAP| 52.253.159.160|
+| Azure Public| East Asia| 52.139.152.27|
+| Azure Public| East US| 52.224.186.99|
+| Azure Public| East US 2| 20.44.72.3|
+| Azure Public| East US 2 EUAP| 52.253.229.253|
+| Azure Public| France Central| 40.66.60.111|
+| Azure Public| France South| 20.39.80.2|
+| Azure Public| Germany North| 51.116.0.0|
+| Azure Public| Germany West Central| 51.116.96.0, 20.52.94.112|
+| Azure Public| Japan East| 52.140.238.179|
+| Azure Public| Japan West| 40.81.185.8|
+| Azure Public| India Central| 20.192.234.160|
+| Azure Public| India West| 20.193.202.160|
+| Azure Public| Korea Central| 40.82.157.167, 20.194.74.240|
+| Azure Public| Korea South| 40.80.232.185|
+| Azure Public| North Central US| 40.81.47.216|
+| Azure Public| North Europe| 52.142.95.35|
+| Azure Public| Norway East| 51.120.2.185|
+| Azure Public| Norway West| 51.120.130.134|
+| Azure Public| South Africa North| 102.133.130.197, 102.37.166.220|
+| Azure Public| South Africa West| 102.133.0.79|
+| Azure Public| South Central US| 20.188.77.119, 20.97.32.190|
+| Azure Public| South India| 20.44.33.246|
+| Azure Public| Southeast Asia| 40.90.185.46|
+| Azure Public| Switzerland North| 51.107.0.91|
+| Azure Public| Switzerland West| 51.107.96.8|
+| Azure Public| UAE Central| 20.37.81.41|
+| Azure Public| UAE North| 20.46.144.85|
+| Azure Public| UK South| 51.145.56.125|
+| Azure Public| UK West| 51.137.136.0|
+| Azure Public| West Central US| 52.253.135.58|
+| Azure Public| West Europe| 51.145.179.78|
+| Azure Public| West India| 40.81.89.24|
+| Azure Public| West US| 13.64.39.16|
+| Azure Public| West US 2| 51.143.127.203|
+| Azure Public| West US 3| 20.150.167.160|
+| Azure China 21Vianet| China North (Global)| 139.217.51.16|
+| Azure China 21Vianet| China East (Global)| 139.217.171.176|
+| Azure China 21Vianet| China North| 40.125.137.220|
+| Azure China 21Vianet| China East| 40.126.120.30|
+| Azure China 21Vianet| China North 2| 40.73.41.178|
+| Azure China 21Vianet| China East 2| 40.73.104.4|
+| Azure Government| USGov Virginia (Global)| 52.127.42.160|
+| Azure Government| USGov Texas (Global)| 52.127.34.192|
+| Azure Government| USGov Virginia| 52.227.222.92|
+| Azure Government| USGov Iowa| 13.73.72.21|
+| Azure Government| USGov Arizona| 52.244.32.39|
+| Azure Government| USGov Texas| 52.243.154.118|
+| Azure Government| USDoD Central| 52.182.32.132|
+| Azure Government| USDoD East| 52.181.32.192|
++
+## Next steps
+
+Learn more about:
+
+* [Connecting a virtual network to backend using VPN Gateway](../vpn-gateway/design.md#s2smulti)
+* [Connecting a virtual network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
+* [Debug your APIs using request tracing](api-management-howto-api-inspector.md)
+* [Virtual Network frequently asked questions](../virtual-network/virtual-networks-faq.md)
+* [Service tags](../virtual-network/network-security-groups-overview.md#service-tags)
+
+[api-management-using-vnet-menu]: ./media/api-management-using-with-vnet/api-management-menu-vnet.png
+[api-management-setup-vpn-select]: ./media/api-management-using-with-vnet/api-management-using-vnet-select.png
+[api-management-setup-vpn-add-api]: ./media/api-management-using-with-vnet/api-management-using-vnet-add-api.png
+[api-management-vnet-public]: ./media/api-management-using-with-vnet/api-management-vnet-external.png
+
+[Enable VPN connections]: #enable-vpn
+[Connect to a web service behind VPN]: #connect-vpn
+[Related content]: #related-content
+
+[UDRs]: ../virtual-network/virtual-networks-udr-overview.md
+[NetworkSecurityGroups]: ../virtual-network/network-security-groups-overview.md
+[ServiceEndpoints]: ../virtual-network/virtual-network-service-endpoints-overview.md
+[ServiceTags]: ../virtual-network/network-security-groups-overview.md#service-tags
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-container-github-action.md
Title: Custom container CI/CD from GitHub Actions description: Learn how to use GitHub Actions to deploy your custom Linux container to App Service from a CI/CD pipeline. Previously updated : 12/04/2020 Last updated : 12/15/2021
For an Azure App Service container workflow, the file has three sections:
## Generate deployment credentials
-The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal but the process requires more steps.
+The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal or OpenID Connect, but the process requires more steps.
Save your publish profile credential or service principal as a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) to authenticate with Azure. You'll access the secret within your workflow.
In the example, replace the placeholders with your subscription ID, resource gro
> [!IMPORTANT] > It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
+# [OpenID Connect](#tab/openid)
+
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
+
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal). Create the Active Directory application.
+
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
+
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
+
+    This command generates JSON output with a different `objectId`, which is used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+* Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+* Set a value for `CREDENTIAL-NAME` to reference later.
+* Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+  * For jobs not tied to an environment, include the ref path for the branch or tag, based on the ref path used to trigger the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+```azurecli
+az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+```
+
+To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect). A consolidated command-line sketch of the preceding steps follows.
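The following sketch combines the steps above into one script. It assumes a Bash shell, uses a hypothetical credential name (`github-main`), and the `objectId`/`appOwnerTenantId` property names reflect the Azure CLI output at the time this article was written; newer CLI versions based on Microsoft Graph return `id` and `appOwnerOrganizationId` instead.

```azurecli
# Create the app registration and capture its client ID and object ID.
appId=$(az ad app create --display-name myApp --query appId --output tsv)
appObjectId=$(az ad app show --id $appId --query objectId --output tsv)

# Create the service principal and capture the assignee object ID.
assigneeObjectId=$(az ad sp create --id $appId --query objectId --output tsv)

# Capture the subscription and tenant IDs for the role assignment and GitHub secrets.
subscriptionId=$(az account show --query id --output tsv)
tenantId=$(az ad sp show --id $appId --query appOwnerTenantId --output tsv)

# Assign the Contributor role to the service principal on the subscription.
az role assignment create --role contributor --subscription $subscriptionId \
    --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal

# Add the federated identity credential for the main branch of your repository.
az rest --method POST \
    --uri "https://graph.microsoft.com/beta/applications/$appObjectId/federatedIdentityCredentials" \
    --body '{"name":"github-main","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
```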
++ ## Configure the GitHub secret for authentication
When you configure the workflow file later, you use the secret for the input `cr
with: creds: ${{ secrets.AZURE_CREDENTIALS }} ```
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. You can also set these secrets from the command line; a GitHub CLI sketch follows the steps below.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets. You can find these values in the Azure portal by searching for your active directory application.
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
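As an alternative to the portal steps above, here's a sketch using the GitHub CLI. It assumes `gh` is installed and authenticated to your repository, and that you've captured the values into shell variables (for example, from the commands in the previous section).

```console
gh secret set AZURE_CLIENT_ID --body "$appId"
gh secret set AZURE_TENANT_ID --body "$tenantId"
gh secret set AZURE_SUBSCRIPTION_ID --body "$subscriptionId"
```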
jobs:
az logout ```
+# [OpenID Connect](#tab/openid)
+
+```yaml
+on: [push]
+name: Linux_Container_Node_Workflow
+
+permissions:
+ id-token: write
+ contents: read
+
+jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+ # checkout the repo
+ - name: 'Checkout GitHub Action'
+ uses: actions/checkout@main
+
+ - name: 'Login via Azure CLI'
+ uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - uses: azure/docker-login@v1
+ with:
+ login-server: mycontainer.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
+ - run: |
+ docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
+ docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
+
+ - uses: azure/webapps-deploy@v2
+ with:
+ app-name: 'myapp'
+ images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}'
+
+ - name: Azure logout
+ run: |
+ az logout
+```
## Next steps
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-github-actions.md
Title: Configure CI/CD with GitHub Actions description: Learn how to deploy your code to Azure App Service from a CI/CD pipeline with GitHub Actions. Customize the build tasks and execute complex deployments. Previously updated : 09/14/2020 Last updated : 12/14/2021
You can also deploy a workflow without using the Deployment Center. To do so, yo
## Generate deployment credentials
-The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal but the process requires more steps.
+The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal or OpenID Connect, but the process requires more steps.
Save your publish profile credential or service principal as a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) to authenticate with Azure. You'll access the secret within your workflow.
In the example above, replace the placeholders with your subscription ID, resour
> [!IMPORTANT] > It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
+# [OpenID Connect](#tab/openid)
+
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
+
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal). Create the Active Directory application.
+
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
+
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
+
+    This command generates JSON output with a different `objectId`, which is used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+* Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+* Set a value for `CREDENTIAL-NAME` to reference later.
+* Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+  * For jobs not tied to an environment, include the ref path for the branch or tag, based on the ref path used to trigger the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+```azurecli
+az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+```
+
+To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+ ## Configure the GitHub secret
When you configure the workflow file later, you use the secret for the input `cr
creds: ${{ secrets.AZURE_CREDENTIALS }} ```
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
+ ## Set up the environment
jobs:
az logout ```
+# [OpenID Connect](#tab/openid)
+
+### .NET Core
+
+Build and deploy a .NET Core app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action.
++
+```yaml
+name: .NET Core
+
+on: [push]
+
+permissions:
+ id-token: write
+ contents: read
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+  DOTNET_VERSION: '3.1.x' # set this to the .NET Core version to use
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ # Checkout the repo
+ - uses: actions/checkout@main
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+
+ # Setup .NET Core SDK
+ - name: Setup .NET Core
+ uses: actions/setup-dotnet@v1
+ with:
+ dotnet-version: ${{ env.DOTNET_VERSION }}
+
+ # Run dotnet build and publish
+ - name: dotnet build and publish
+ run: |
+ dotnet restore
+ dotnet build --configuration Release
+ dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+
+ # Deploy to Azure Web apps
+ - name: 'Run Azure webapp deploy action using publish profile credentials'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name
+ package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+
+ - name: logout
+ run: |
+ az logout
+```
+
+### ASP.NET
+
+Build and deploy an ASP.NET MVC app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action.
+
+```yaml
+name: Deploy ASP.NET MVC App deploy to Azure Web App
+
+on: [push]
+
+permissions:
+ id-token: write
+ contents: read
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+  NUGET_VERSION: '5.3.x' # set this to the NuGet version to use
+
+jobs:
+ build-and-deploy:
+ runs-on: windows-latest
+ steps:
+
+ # checkout the repo
+ - uses: actions/checkout@main
+
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - name: Install Nuget
+ uses: nuget/setup-nuget@v1
+ with:
+ nuget-version: ${{ env.NUGET_VERSION}}
+ - name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file
+ run: nuget restore
+
+ - name: Add msbuild to PATH
+ uses: microsoft/setup-msbuild@v1.0.2
+
+ - name: Run MSBuild
+ run: msbuild .\SampleWebApplication.sln
+
+ - name: 'Run Azure webapp deploy action using publish profile credentials'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name
+ package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/SampleWebApplication/'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+```
+
+### Java
+
+Build and deploy a Java Spring app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action.
+
+```yaml
+name: Java CI with Maven
+
+on: [push]
+
+permissions:
+ id-token: write
+ contents: read
+
+jobs:
+ build:
+
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ - name: Set up JDK 1.8
+ uses: actions/setup-java@v1
+ with:
+ java-version: 1.8
+ - name: Build with Maven
+ run: mvn -B package --file pom.xml
+ working-directory: complete
+ - name: Azure WebApp
+ uses: Azure/webapps-deploy@v2
+ with:
+ app-name: my-app-name
+ package: my/target/*.jar
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+```
+
+### JavaScript
+
+Build and deploy a Node.js app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action.
++
+```yaml
+name: JavaScript CI
+
+on: [push]
+
+permissions:
+ id-token: write
+ contents: read
+
+
+env:
+  AZURE_WEBAPP_NAME: my-app # set this to your application's name
+  AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project
+  NODE_VERSION: '14.x' # set this to the node version to use
+
+jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+ # checkout the repo
+ - name: 'Checkout GitHub Action'
+ uses: actions/checkout@main
+
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - name: Setup Node ${{ env.NODE_VERSION }}
+ uses: actions/setup-node@v1
+ with:
+ node-version: ${{ env.NODE_VERSION }}
+
+ - name: 'npm install, build, and test'
+ run: |
+ npm install
+ npm run build --if-present
+ npm run test --if-present
+ working-directory: my-app-path
+
+ # deploy web app using Azure credentials
+ - uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+```
+
+### Python
+
+Build and deploy a Python app to Azure using an Azure service principal. The example uses GitHub secrets for the `client-id`, `tenant-id`, and `subscription-id` values. You can also pass these values directly in the login action.
+
+```yaml
+name: Python application
+
+on:
+ [push]
+
+permissions:
+ id-token: write
+ contents: read
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - name: Set up Python 3.x
+ uses: actions/setup-python@v2
+ with:
+ python-version: 3.x
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements.txt
+ - name: Deploy web App using GH Action azure/webapps-deploy
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+ - name: logout
+ run: |
+ az logout
+```
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
The deployment process to Azure App Service will use your Azure credentials from
Run the Maven command below to configure the deployment. This command will help you to set up the App Service operating system, Java version, and Tomcat version. ```azurecli-interactive
-mvn com.microsoft.azure:azure-webapp-maven-plugin:2.2.3:config
+mvn com.microsoft.azure:azure-webapp-maven-plugin:2.3.0:config
``` ::: zone pivot="platform-windows"
JBoss EAP is only available on the Linux version of App Service. Select the **Li
1. When prompted with **Web App** option, select the default option, `<create>`, by pressing enter. 1. When prompted with **OS** option, select **Linux** by pressing enter. 1. When prompted with **javaVersion** option, select **Java 11** by entering `2`.
-1. When prompted with **webcontainer** option, select **Tomcat 8.5** by entering `3`.
+1. When prompted with **webcontainer** option, select **Tomcat 8.5** by entering `2`.
1. When prompted with **Pricing Tier** option, select **P1v2** by entering `9`. 1. Finally, press enter on the last prompt to confirm your selections.
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/reference-app-settings.md
The following are 'fake' environment variables that don't exist if you enumerate
| Setting name| Description | |-|-| | `WEBSITE_LOCAL_CACHE_OPTION` | Whether local cache is enabled. Available options are: <br/>- `Default`: Inherit the stamp-level global setting.<br/>- `Always`: Enable for the app.<br/>- OnStorageUnavailability<br/>- `Disabled`: Disabled for the app. |
-| `WEBSITE_LOCAL_CACHE_READWRITE_OPTION` | Read-write options of the local cache. Available options are: <br/>- `ReadOnly`: Cache is read-only.<br/>- `WriteWithCopyBack`: Allow writes to local cache and copy periodically to shared storage. Applicable only for single instance apps as the SCM site points to local cache.<br/>- `WriteButDiscardChanges`: Allow writes to local cache but discard changes made locally. |
+| `WEBSITE_LOCAL_CACHE_READWRITE_OPTION` | Read-write options of the local cache. Available options are: <br/>- `ReadOnly`: Cache is read-only.<br/>- `WriteButDiscardChanges`: Allow writes to local cache but discard changes made locally. |
| `WEBSITE_LOCAL_CACHE_SIZEINMB` | Size of the local cache in MB. Default is `1000` (1 GB). | | `WEBSITE_LOCALCACHE_READY` | Read-only flag indicating if the app is using local cache. | | `WEBSITE_DYNAMIC_CACHE` | Due to the network file share's nature of allowing access for multiple instances, the dynamic cache improves performance by caching the recently accessed files locally on an instance. Cache is invalidated when a file is modified. The cache location is `%SYSTEMDRIVE%\local\DynamicCache` (same `%SYSTEMDRIVE%\local` quota is applied). By default, full content caching is enabled (set to `1`), which includes both file content and directory/file metadata (timestamps, size, directory content). To conserve local disk use, set to `2` to cache only directory/file metadata (timestamps, size, directory content). To turn off caching, set to `0`. |
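As a sketch, these settings can be applied with the Azure CLI; the resource group and app names are placeholders:

```azurecli
az webapp config appsettings set \
    --resource-group my-rg \
    --name my-app \
    --settings WEBSITE_LOCAL_CACHE_OPTION=Always WEBSITE_LOCAL_CACHE_SIZEINMB=1000
```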
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/overview.md
The following features and development options are supported by the Form Recogn
| Feature | Description | Development options | |-|--|-|
-|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#try-it-general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#try-it-layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
+|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom forms**.</li></ul>| <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#custom-projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>| |[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#try-it-prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#try-it-prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#try-it-prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](quickstarts/try-v3-form-recognizer-studio.md#prebuilt-models)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 12/09/2021 Last updated : 01/04/2022 recommendations: false -
+<!-- markdownlint-disable MD025 -->
# Quickstart: C# client library SDK v3.0 | Preview >[!NOTE]
To interact with the Form Recognizer service, you'll need to create an instance
:::image type="content" source="../media/quickstarts/add-code-here.png" alt-text="Screenshot: add the sample code to the Main method.":::
+> [!TIP]
+> If you would like to try more than one code sample:
+>
+> * Select one of the sample code blocks below to copy and paste into your application.
+> * [**Run your application**](#run-your-application).
+> * Comment out that sample code block but keep the set-up code and library directives.
+> * Select another sample code block to copy and paste into your application.
+> * [**Run your application**](#run-your-application).
+> * You can continue to comment out, copy/paste, and run the sample blocks of code.
+ ### Select one of the following code samples to copy and paste into your application Program.cs file: * [**General document model**](#general-document-model)
You are not limited to invoicesΓÇöthere are several prebuilt models to choose fr
* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports. * [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
-#### Try the prebuilt invoice sample
+#### Try the prebuilt invoice model
> [!div class="checklist"] >
-> * For this example, we wll analyze an invoice document using a prebuilt model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * We'll analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URI value to the `Uri fileUri` variable at the top of the Program.cs file. > * To analyze a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-invoice` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 11/02/2021 Last updated : 01/04/2022 recommendations: false
+<!-- markdownlint-disable MD025 -->
# Quickstart: Java client library SDK v3.0 | Preview
To learn more about Form Recognizer features and development options, visit our
In this quickstart you'll use following features to analyze and extract data and values from forms and documents:
-* [🆕 **General document**](#try-it-general-document-model)—Analyze and extract text, tables, structure, key-value pairs and named entities.
+* [🆕 **General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* [**Layout**](#try-it-layout-model)ΓÇöAnalyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* [**Layout**](#layout-model)ΓÇöAnalyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-* [**Prebuilt Invoice**](#try-it-prebuilt-model)ΓÇöAnalyze and extract common fields from specific document types using a pre-trained model.
+* [**Prebuilt Invoice**](#prebuilt-model)ΓÇöAnalyze and extract common fields from specific document types using a pre-trained model.
## Prerequisites * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Java Development Kit (JDK)](/java/azure/jdk/?view=azure-java-stable&preserve-view=true
-), version 8 or later
+* The latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+
+ >[!TIP]
+ >
+ > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The Coding Pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment.
+ > * If you are using VS Code and the Coding Pack for Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
+
+* If you aren't using VS Code, make sure you have the following installed in your development environment:
+
+ * A [**Java Development Kit** (JDK)](https://www.oracle.com/java/technologies/downloads/) version 8 or later.
+
+ * [**Gradle**](https://gradle.org/), version 6.8 or later.
+ * A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-> [!TIP]
-> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'lll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+ > [!TIP]
+    > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
-* After your resource deploys, click **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
+* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. Later, you'll paste your key and endpoint into the code below:
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal."::: ## Set up
-### Create a new Gradle project
+#### Create a new Gradle project
-In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **form-recognizer-app**, and navigate to it.
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **form-recognizer-app**, and navigate to it.
-```console
-mkdir form-recognizer-app && form-recognizer-app
-```
+ ```console
+    mkdir form-recognizer-app && cd form-recognizer-app
+ ```
-Run the `gradle init` command from your working directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
+1. Run the `gradle init` command from your working directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
-```console
-gradle init --type basic
-```
+ ```console
+ gradle init --type basic
+ ```
1. When prompted to choose a **DSL**, select **Kotlin**. 1. Accept the default project name (form-recognizer-app)
-### Install the client library
+#### Install the client library
This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the [Maven Central Repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer).
-In your project's *build.gradle.kts* file, include the client library as an `implementation` statement, along with the required plugins and settings.
+1. Open the project's *build.gradle.kts* file in your IDE. Copy and paste the following code to include the client library as an `implementation` statement, along with the required plugins and settings.
-```kotlin
-plugins {
- java
- application
-}
-application {
- mainClass.set("FormRecognizer")
-}
-repositories {
- mavenCentral()
-}
-dependencies {
- implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.1")
-}
-```
+ ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClass.set("FormRecognizer")
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.2")
+ }
+ ```
-### Create a Java file
+#### Create a Java file
-From your working directory, run the following command:
+1. From the form-recognizer-app directory, run the following command:
-```console
-mkdir -p src/main/java
-```
+ ```console
+ mkdir -p src/main/java
+ ```
+
+ You will create the following directory structure:
+
+ :::image type="content" source="../media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure":::
-You will create the following directory structure:
+1. Navigate to the `java` directory and create a file called *FormRecognizer.java*.
+ > [!TIP]
+ >
+    > * You can create a new file using PowerShell.
+    > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+    > * Type the following command: `New-Item FormRecognizer.java`.
-Navigate to the new folder and create a file called *FormRecognizer.java* (created during the `gradle init` command). Open it in your preferred editor or IDE and add the following package declaration and `import` statements:
+1. Open the `FormRecognizer.java` file in your preferred editor or IDE and add the following `import` statements:
+
+ ```java
+ import com.azure.ai.formrecognizer.*;
+ import com.azure.ai.formrecognizer.models.AnalyzeResult;
+ import com.azure.ai.formrecognizer.models.DocumentLine;
+ import com.azure.ai.formrecognizer.models.AnalyzedDocument;
+ import com.azure.ai.formrecognizer.models.DocumentOperationResult;
+ import com.azure.ai.formrecognizer.models.DocumentWord;
+ import com.azure.ai.formrecognizer.models.DocumentTable;
+ import com.azure.core.credential.AzureKeyCredential;
+ import com.azure.core.util.polling.SyncPoller;
+
+ import java.util.List;
+ import java.util.Arrays;
+ ```
+
+#### Create the **FormRecognizer** class:
+
+Next, you'll need to create a public class for your project:
```java
-package com.azure.ai.formrecognizer;
-
-import com.azure.ai.formrecognizer.models.AnalyzeDocumentOptions;
-import com.azure.ai.formrecognizer.models.AnalyzedDocument;
-import com.azure.core.credential.AzureKeyCredential;
-import com.azure.core.http.HttpPipeline;
-import com.azure.core.http.HttpPipelineBuilder;
-import com.azure.core.util.Context;
-
-import java.io.ByteArrayInputStream;
-import java.io.File;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.file.Files;
-import java.util.Arrays;
+public class FormRecognizer {
+ // All project code goes here...
+}
```
-### Select a code sample to copy and paste into your application's main method:
+> [!TIP]
+> If you would like to try more than one code sample:
+>
+> * Select one of the sample code blocks below to copy and paste into your application.
+> * [**Build and run your application**](#build-and-run-your-application).
+> * Comment out that sample code block but keep the set-up code and library directives.
+> * Select another sample code block to copy and paste into your application.
+> * [**Build and run your application**](#build-and-run-your-application).
+> * You can continue to comment out, copy/paste, build, and run the sample blocks of code.
+
+#### Select a code sample to copy and paste into your application's main method:
-* [**General document**](#try-it-general-document-model)
+* [**General document**](#general-document-model)
-* [**Layout**](#try-it-layout-model)
+* [**Layout**](#layout-model)
-* [**Prebuilt Invoice**](#try-it-prebuilt-model)
+* [**Prebuilt Invoice**](#prebuilt-model)
> [!IMPORTANT] >
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article for more information.
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, see the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article.
-## **Try it**: General document model
+## General document model
+
+Extract text, tables, structure, key-value pairs, and named entities from documents.
> [!div class="checklist"] >
import java.util.Arrays;
> * We've added the file URI value to the `documentUrl` variable in the main method. > * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-Update your application's **FormRecognizer** class, with the following code (be certain to update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal):
+Add the following code to the `FormRecognizer` class. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:
```java
-public class FormRecognizer {
- static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
- static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
+ private static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
+ private static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
public static void main(String[] args) {
- DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential("{key}"))
- .endpoint("{endpoint}")
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
.buildClient(); String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"; String modelId = "prebuilt-document";
- SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeDocumentPoller =
- documentAnalysisClient.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+ SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
- for (int i = 0; i < analyzeResult.getDocuments().size(); i++) {
- final AnalyzedDocument analyzedDocument = analyzeResult.getDocuments().get(i);
- System.out.printf("-- Analyzing document %d --%n", i);
- System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
- analyzedDocument.getDocType(), analyzedDocument.getConfidence());
- }
-
- analyzeResult.getPages().forEach(documentPage - > {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
+ // pages
+ analyzeResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
// lines
- documentPage.getLines().forEach(documentLine - >
+ documentPage.getLines().forEach(documentLine ->
System.out.printf("Line %s is within a bounding box %s.%n", documentLine.getContent(), documentLine.getBoundingBox().toString())); // words
- documentPage.getWords().forEach(documentWord - >
+ documentPage.getWords().forEach(documentWord ->
System.out.printf("Word %s has a confidence score of %.2f%n.", documentWord.getContent(), documentWord.getConfidence())); }); // tables
- List < DocumentTable > tables = analyzeResult.getTables();
+ List <DocumentTable> tables = analyzeResult.getTables();
for (int i = 0; i < tables.size(); i++) { DocumentTable documentTable = tables.get(i); System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(), documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell - > {
+ documentTable.getCells().forEach(documentTableCell -> {
System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(), documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
public class FormRecognizer {
} // Entities
- analyzeResult.getEntities().forEach(documentEntity - > {
+ analyzeResult.getEntities().forEach(documentEntity -> {
System.out.printf("Entity category : %s, sub-category %s%n: ", documentEntity.getCategory(), documentEntity.getSubCategory()); System.out.printf("Entity content: %s%n: ", documentEntity.getContent()); System.out.printf("Entity confidence: %.2f%n", documentEntity.getConfidence()); });
- // Key-value
- analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair - > {
+ // Key-value pairs
+ analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent()); System.out.printf("Key content bounding region: %s%n", documentKeyValuePair.getKey().getBoundingRegions().toString());
- System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
- System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
- });
- Build a custom document analysis model
- In 3. x.x, creating a custom model required specifying useTrainingLabels to indicate whether to use labeled data when creating the custom model with the beginTraining method.
- In 4. x.x, we introduced the new general document model(prebuilt - document) to replace the train without labels functionality from 3. x.x which extracts entities, key - value pairs, and layout from a document with the beginBuildModel method.In 4. x.x the beginBuildModel always returns labeled data otherwise.
- Train a custom model using 3. x.x beginTraining:
-
- String trainingFilesUrl = "{SAS_URL_of_your_container_in_blob_storage}";
- SyncPoller < FormRecognizerOperationResult, CustomFormModel > trainingPoller =
- formTrainingClient.beginTraining(trainingFilesUrl,
- false,
- new TrainingOptions()
- .setModelName("my model trained without labels"),
- Context.NONE);
-
- CustomFormModel customFormModel = trainingPoller.getFinalResult();
-
- // Model Info
- System.out.printf("Model Id: %s%n", customFormModel.getModelId());
- System.out.printf("Model name given by user: %s%n", customFormModel.getModelName());
- System.out.printf("Model Status: %s%n", customFormModel.getModelStatus());
- System.out.printf("Training started on: %s%n", customFormModel.getTrainingStartedOn());
- System.out.printf("Training completed on: %s%n%n", customFormModel.getTrainingCompletedOn());
-
- System.out.println("Recognized Fields:");
- // looping through the subModels, which contains the fields they were trained on
- // Since the given training documents are unlabeled we still group them but, they do not have a label.
- customFormModel.getSubmodels().forEach(customFormSubmodel - > {
- System.out.printf("Submodel Id: %s%n: ", customFormSubmodel.getModelId());
- // Since the training data is unlabeled, we are unable to return the accuracy of this model
- customFormSubmodel.getFields().forEach((field, customFormModelField) - >
- System.out.printf("Field: %s Field Label: %s%n",
- field, customFormModelField.getLabel()));
- });
-
- System.out.println();
- customFormModel.getTrainingDocuments().forEach(trainingDocumentInfo - > {
- System.out.printf("Document name: %s%n", trainingDocumentInfo.getName());
- System.out.printf("Document status: %s%n", trainingDocumentInfo.getStatus());
- System.out.printf("Document page count: %d%n", trainingDocumentInfo.getPageCount());
- if (!trainingDocumentInfo.getErrors().isEmpty()) {
- System.out.println("Document Errors:");
- trainingDocumentInfo.getErrors().forEach(formRecognizerError - >
- System.out.printf("Error code %s, Error message: %s%n", formRecognizerError.getErrorCode(),
- formRecognizerError.getMessage()));
+ if (documentKeyValuePair.getValue() != null) {
+ System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
+ System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
} }); } ```
-## **Try it**: Layout model
+## Layout model
-Extract text, selection marks, text styles, and table structures, along with their bounding region coordinates from documents.
+Extract text, selection marks, text styles, table structures, and bounding region coordinates from documents.
> [!div class="checklist"] >
Extract text, selection marks, text styles, and table structures, along with the
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocumentFromUrl` method and pass `prebuilt-layout` as the model Id. The returned value is an `AnalyzeResult` object containing data about the submitted document. > * We've added the file URI value to the `documentUrl` variable in the main method.
-Update your application's **FormRecognizer** class, with the following code (be certain to update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal):
+#### Update the **FormRecognizer** class:
-```java
-public class FormRecognizer {
+Add the following code to the `FormRecognizer` class. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:
- static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
- static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
-
- public static void main(String[] args) {
+```java
+public static void main(String[] args) {
- DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential("{key}"))
- .endpoint("{endpoint}")
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
.buildClient();
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
String modelId = "prebuilt-layout"; SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
- documentAnalysisClient.beginAnalyzeDocument(modelId, documentUrl);
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
- // pages
- analyzeLayoutResult.getPages().forEach(documentPage - > {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
+ // pages
+ analyzeLayoutResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
// lines
- documentPage.getLines().forEach(documentLine - >
+ documentPage.getLines().forEach(documentLine ->
System.out.printf("Line %s is within a bounding box %s.%n", documentLine.getContent(), documentLine.getBoundingBox().toString()));
+ // words
+ documentPage.getWords().forEach(documentWord ->
+ System.out.printf("Word '%s' has a confidence score of %.2f.%n",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+ // selection marks
- documentPage.getSelectionMarks().forEach(documentSelectionMark - >
+ documentPage.getSelectionMarks().forEach(documentSelectionMark ->
System.out.printf("Selection mark is %s and is within a bounding box %s with confidence %.2f.%n", documentSelectionMark.getState().toString(), documentSelectionMark.getBoundingBox().toString(),
public class FormRecognizer {
DocumentTable documentTable = tables.get(i); System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(), documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell - > {
+ documentTable.getCells().forEach(documentTableCell -> {
System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(), documentTableCell.getRowIndex(), documentTableCell.getColumnIndex()); });
public class FormRecognizer {
} ```
-## **Try it**: Prebuilt model
+## Prebuilt model
Extract and analyze data from common document types using a pre-trained model. ##### Choose a prebuilt model ID
-You are not limited to invoicesΓÇöthere are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
+You're not limited to invoices. There are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the model IDs for the prebuilt models currently supported by the Form Recognizer service:
* [**prebuilt-invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices. * [**prebuilt-receipt**](../concept-receipt.md): extracts text and key information from receipts. * [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports. * [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
-#### Try the prebuilt invoice sample
+#### Try the prebuilt invoice model
> [!div class="checklist"] >
-> * You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * We will analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URL value to the `invoiceUrl` variable at the top of the file. > * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-Update your application's **FormRecognizer** class, with the following code (be certain to update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal):
+#### Update the **FormRecognizer** class:
+
+Replace the existing FormRecognizer class with the following code (be certain to update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal):
```java
gradle build
gradle run ```
-Congratulations! In this quickstart, you used the Form Recognizer Java SDK to analyze various forms and documents in different ways. Next, explore the reference documentation to learn about Form Recognizer API in more depth.
+That's it, congratulations!
+
+In this quickstart, you used the Form Recognizer Java SDK to analyze various forms and documents in different ways. Next, explore the reference documentation to learn about Form Recognizer API in more depth.
## Next steps
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 11/02/2021 Last updated : 01/04/2022 recommendations: false -
+<!-- markdownlint-disable MD025 -->
# Quickstart: Form Recognizer JavaScript client library SDKs v3.0 | Preview >[!NOTE]
const endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
const apiKey = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE"; ```
+> [!TIP]
+> If you would like to try more than one code sample:
+>
+> * Select one of the sample code blocks below to copy and paste into your application.
+> * [**Run your application**](#run-your-application).
+> * Comment out that sample code block but keep the set-up code and library directives.
+> * Select another sample code block to copy and paste into your application.
+> * [**Build and run your application**](#run-your-application).
+> * You can continue to comment out, copy/paste, and run the sample blocks of code.
+ ### Select a code sample to copy and paste into your application: * [**General document**](#general-document-model)
You are not limited to invoicesΓÇöthere are several prebuilt models to choose fr
* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports. * [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
-#### Try the prebuilt invoice sample
+#### Try the prebuilt invoice model
> [!div class="checklist"] >
-> * You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * We will analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URL value to the `invoiceUrl` variable at the top of the file. > * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 12/13/2021 Last updated : 01/04/2022 recommendations: false -
+<!-- markdownlint-disable MD025 -->
# Quickstart: Python client library SDK v3.0 | Preview >[!NOTE]
key = "YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY"
```
+> [!TIP]
+> If you would like to try more than one code sample:
+>
+> * Select one of the sample code blocks below to copy and paste into your application.
+> * [**Run your application**](#run-your-application).
+> * Comment out that sample code block but keep the set-up code and library directives.
+> * Select another sample code block to copy and paste into your application.
+> * [**Run your application**](#run-your-application).
+> * You can continue to comment out, copy/paste, and run the sample blocks of code.
+ ### Select a code sample to copy and paste into your application: * [**General document**](#general-document-model)
You are not limited to invoicesΓÇöthere are several prebuilt models to choose fr
* [**prebuilt-idDocument**](../concept-id-document.md): extracts text and key information from driver licenses and international passports. * [**prebuilt-businessCard**](../concept-business-card.md): extracts text and key information from business cards.
-#### Try the prebuilt invoice sample
+#### Try the prebuilt invoice model
> [!div class="checklist"] >
-> * You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
+> * We will analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
> * We've added the file URL value to the `invoiceUrl` variable at the top of the file. > * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
You will need a direct mode data controller with the imageTag v1.0.0_2021-07-30
To check the version, run: ```console
-kubectl get datacontrollers -n -o custom-columns=BUILD:.spec.docker.imageTag
+kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag
``` ## Install tools
Ready
## Troubleshoot upgrade problems
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
+If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
In this article, you will apply a .yaml file to:
> Some of the data services tiers and modes are generally available and some are in preview. > If you install GA and preview services on the same data controller, you can't upgrade in place. > To upgrade, delete all non-GA database instances. You can find the list of generally available
-> and preview services in the [Release Notes](/release-notes).
+> and preview services in the [Release Notes](/azure/azure-arc/data/release-notes).
## Prerequisites
To specify the service account:
1. Describe the service account in a .yaml file. The following example sets a name for `ServiceAccount` as `sa-arc-upgrade-worker`:
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/service-account.yaml":::
- <!-- https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/service-account.yaml-->
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="2-4":::
1. Edit the file as needed.
A cluster role (`ClusterRole`) grants the service account permission to perform
1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs (a generic sketch of such a manifest is shown after these steps).
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/cluster-role.yaml":::
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
1. Edit the file as needed.
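For orientation, a cluster role that allows all API groups, resources, and verbs generally looks like the following minimal sketch. It isn't the exact file from the referenced sample, so treat it as illustrative only:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: arc:cr-upgrade-worker
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```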
A cluster role binding (`ClusterRoleBinding`) links the service account and the
1. Describe the cluster role binding in a .yaml file. The following example describes a cluster role binding for the service account (a generic sketch of such a binding is shown after these steps).
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/cluster-role-binding.yaml":::
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
1. Edit the file as needed.
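A cluster role binding that links the service account and cluster role named above generally looks like the following minimal sketch. The binding name and namespace placeholder are illustrative, not taken from the sample file:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: arc-upgrade-worker-binding   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: arc:cr-upgrade-worker
subjects:
- kind: ServiceAccount
  name: sa-arc-upgrade-worker
  namespace: <namespace>   # namespace where the data controller is deployed
```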
A job creates a pod to execute the upgrade.
1. Describe the job in a .yaml file. The following example creates a job called `arc-bootstrapper-upgrade-job`.
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/job.yaml":::
+ :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="31-48":::
1. Edit the file for your environment.
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
In Azure Functions, a function project is a container for one or more individual
```console func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous" ```- `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*.
+
+ Get the list of templates by using the following command.
+
+ ```console
+ func templates list -l python
+ ```
+
### (Optional) Examine the file contents
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-action-rules.md
In the fourth tab (**Details**), you give this rule a name, pick where it will b
### [Azure CLI](#tab/azure-cli) > [!NOTE]
-> The Azure CLI is in the process of being updated to leverage the latest preview version of alert processing rules. Until then, you can use existing CLI capabilities under the **action rule** command to create alert processing rules. That CLI does not support some of the newer alert processing rules features.
+> The Azure CLI is in the process of being updated to leverage the GA API of alert processing rules. Until then, you can use existing CLI capabilities under the **action rule** command to create alert processing rules. Meanwhile, the existing CLI does not support some of the newer alert processing rules features.
You can create alert processing rules with the Azure CLI using the [az monitor action-rule create](/cli/azure/monitor/action-rule#az_monitor_action_rule_create) command. The `az monitor action-rule` reference is just one of many [Azure CLI references for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor).
From here, you can enable, disable, or delete alert processing rules at scale by
### [Azure CLI](#tab/azure-cli) > [!NOTE]
-> The Azure CLI is in the process of being updated to leverage the latest preview version of alert processing rules. Until then, you can use existing CLI capabilies under the **action rule** command to create alert processing rules. That CLI does not support some of the newer alert processing rules features.
+> The Azure CLI is in the process of being updated to leverage the GA API of alert processing rules. Until then, you can use existing CLI capabilities under the **action rule** command to create alert processing rules. Meanwhile, the existing CLI does not support some of the newer alert processing rules features.
You can view and manage your alert processing rules using the [az monitor action-rule](/cli/azure/monitor) command from the Azure CLI.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Last updated 12/01/2021
Log Analytics workspace data export in Azure Monitor allows you to continuously export data from selected tables in your Log Analytics workspace to an Azure storage account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces. ## Overview
-Data stored in Log Analytics is available for the retention period defined in your workspace and used in Azure Monitor and Azure experiences, where more capabilities can be met using export:
-* Temper protected store compliance -- data can't be altered in Log Analytics once ingested, but can be purged. Export to storage account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to protect data from changes.
-* Integration with Azure services and other tools -- export to event hub in near-real-time lets you integrate with services and tools of your choice.
-* Keep data for long time for compliance and in low cost -- export to storage account in the same region as your workspace, replicate data to other storage accounts in other regions using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS.
+Data in Log Analytics is available for the retention period defined in your workspace and is used in various experiences provided in Azure Monitor and other Azure services. Data export adds capabilities such as:
+* Comply with tamper-protected store requirements -- data can't be altered in Log Analytics once ingested, but it can be purged. Export to a storage account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to keep data tamper protected.
+* Integrate with Azure services and other tools -- export to an event hub in near-real-time to send data to your services and tools as it arrives in Azure Monitor.
+* Keep audit and security data for a long time at low cost -- export to a storage account in the same region as your workspace, or replicate data to other storage accounts in other regions using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS.
Once data export is configured in your Log Analytics workspace, any new data sent to the selected tables in the workspace is automatically exported in near-real-time to your storage account or to your event hub.
Log Analytics workspace data export continuously exports data from a Log Analyti
## Limitations -- You can use Azure portal, CLI, or REST requests in data export configuration. PowerShell isn't supported yet. - All tables will be supported in export, but support is currently limited to those specified in the [supported tables](#supported-tables) section below. - The current custom log tables wonΓÇÖt be supported in export. A new version of custom log preview available February 2022, will be supported in export. - You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled.
Log Analytics workspace data export continuously exports data from a Log Analyti
Data export is optimized for moving large data volume to your destinations and in certain retry conditions, can include a fraction of duplicated records. The export operation to your destination could fail when ingress limits are reached, see details under [Create or update data export rule](#create-or-update-data-export-rule). Export continues to retry for up to 30 minutes and if destination is unavailable to accept data, data will be discarded until the destination becomes available. ## Cost
-Currently, there are no other charges for the data export feature. Pricing for data export will be announced in the future and a notice period provided prior to the start of billing. If you choose to continue using data export after the notice period, you will be billed at the applicable rate.
+Billing for the Log Analytics Data Export feature isn't enabled yet. For more details, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
## Export destinations
A data export rule defines the tables for which data is exported and destination
In the **Log Analytics workspace** menu in the Azure portal, select **Data Export** from the **Settings** section and click **New export rule** from the top of the middle pane.
-![export create](media/logs-data-export/export-create-1.png "Screenshot of data export entry point.")
+[![export create](media/logs-data-export/export-create-1.png "Screenshot of data export entry point.")](media/logs-data-export/export-create-1.png#lightbox)
Follow the steps, then click **Create**.
$storageAccountResourceId = 'subscriptions/subscription-id/resourceGroups/resour
New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $storageAccountResourceId ```
+Use the following command to create a data export rule to a specific event hub using PowerShell. All tables are exported to the provided event hub name and can be filtered by the "Type" field to separate tables.
+
+```powershell
+$eventHubResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName
+```
+
+Use the following command to create a data export rule to an event hub using PowerShell. When a specific event hub name isn't provided, a separate event hub is created for each table, up to the [number of supported event hubs for your event hub tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide an event hub name to export any number of tables, or set another rule to export the remaining tables to another event hub namespace.
+
+```powershell
+$eventHubResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $eventHubResourceId
+```
+ # [Azure CLI](#tab/azure-cli) Use the following command to create a data export rule to a storage account using CLI. A separate container is created for each table.
$storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resou
az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $storageAccountResourceId ```
-Use the following command to create a data export rule to an event hub using CLI. When specific event hub name isn't provided, a separate container is created for each table.
+Use the following command to create a data export rule to a specific event hub using CLI. All tables are exported to the provided event hub name and can be filtered by the "Type" field to separate tables.
```azurecli
-$eventHubsNamespacesResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
-az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubsNamespacesResourceId
+$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
+az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubResourceId
```
-Use the following command to create a data export rule to a specific event hub using CLI. All tables are exported to the provided event hub name.
+Use the following command to create a data export rule to an event hub using CLI. When a specific event hub name isn't provided, a separate event hub is created for each table, up to the [number of supported event hubs for your event hub tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide an event hub name to export any number of tables, or set another rule to export the remaining tables to another event hub namespace.
```azurecli
-$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
-az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubResourceId
+$eventHubsNamespacesResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
+az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubsNamespacesResourceId
``` # [REST](#tab/rest)
Export rules can be disabled to let you stop the export for a certain period suc
# [PowerShell](#tab/powershell)
-Export rules can be disabled to let you stop the export for a certain period such as when testing is being held. Use the following command to disable a data export rule using PowerShell.
+Export rules can be disabled to let you stop the export for a certain period, such as when testing is being performed. Use the following command to disable or update rule parameters using PowerShell.
```powershell
-Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -Enable: $false
+Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -Enable: $false
``` # [Azure CLI](#tab/azure-cli)
-Export rules can be disabled to let you stop the export for a certain period such as when testing is being held. Use the following command to disable a data export rule using CLI.
+Export rules can be disabled to let you stop the export for a certain period, such as when testing is being performed. Use the following command to disable or update rule parameters using CLI.
```azurecli az monitor log-analytics workspace data-export update --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --enable false
az monitor log-analytics workspace data-export update --resource-group resourceG
# [REST](#tab/rest)
-Export rules can be disabled to let you stop the export when you donΓÇÖt need to retain data for a certain period such as when testing is being performed. Use the following request to disable a data export rule using the REST API. The request should use bearer token authorization.
+Export rules can be disabled to let you stop the export for a certain period, such as when testing is being performed. Use the following request to disable or update rule parameters using the REST API. The request should use bearer token authorization.
```rest PUT https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports/<data-export-name>?api-version=2020-08-01
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 11/10/2021 Last updated : 01/04/2022 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## December, 2021
+
+## General
+
+**Updated articles**
+
+- [What is monitored by Azure Monitor?](monitor-reference.md)
+
+## Agents
+
+**Updated articles**
+
+- [Install Log Analytics agent on Windows computers](agents/agent-windows.md)
+- [Log Analytics agent overview](agents/log-analytics-agent.md)
+
+## Alerts
+
+**New articles**
+
+- [Manage alert rules created in previous versions](alerts/alerts-manage-alerts-previous-version.md)
+
+**Updated articles**
+
+- [Create an action group with a Resource Manager template](alerts/action-groups-create-resource-manager-template.md)
+- [Troubleshoot log alerts in Azure Monitor](alerts/alerts-troubleshoot-log.md)
+- [Troubleshooting problems in Azure Monitor alerts](alerts/alerts-troubleshoot.md)
+- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)
+- [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md)
+- [Create, view, and manage metric alerts using Azure Monitor](alerts/alerts-metric.md)
+
+## Application Insights
+
+**New articles**
+
+- [Analyzing product usage with HEART](app/usage-heart.md)
+
+**Updated articles**
+
+- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)
+- [Troubleshooting guide: Azure Monitor Application Insights for Java](app/java-standalone-troubleshoot.md)
+- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
+- [Click Analytics Auto-collection plugin for Application Insights JavaScript SDK](app/javascript-click-analytics-plugin.md)
+
+## Logs
+
+**New articles**
+
+- [Access the Azure Monitor Log Analytics API](logs/api/access-api.md)
+- [Set Up Authentication and Authorization for the Azure Monitor Log Analytics API](logs/api/authentication-authorization.md)
+- [Querying logs for Azure resources](logs/api/azure-resource-queries.md)
+- [Batch queries](logs/api/batch-queries.md)
+- [Caching](logs/api/cache.md)
+- [Cross workspace queries](logs/api/cross-workspace-queries.md)
+- [Azure Monitor Log Analytics API Errors](logs/api/errors.md)
+- [Azure Monitor Log Analytics API Overview](logs/api/overview.md)
+- [Prefer options](logs/api/prefer-options.md)
+- [Azure Monitor Log Analytics API request format](logs/api/request-format.md)
+- [Azure Monitor Log Analytics API response format](logs/api/response-format.md)
+- [Timeouts](logs/api/timeouts.md)
+
+**Updated articles**
+
+- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)
+- [Resource Manager template samples for Log Analytics workspaces in Azure Monitor](logs/resource-manager-workspace.md)
+
+## Virtual Machines
+
+**Updated articles**
+
+- [Enable VM insights overview](vm/vminsights-enable-overview.md)
+++ ## November, 2021 ### General
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* ***Basic*** Selecting this setting enables selective connectivity patterns and limited IP scale as mentioned in the [Considerations](#considerations) section. All the [constraints](#constraints) apply in this setting.
-## Supported regions
+### Supported regions
Azure NetApp Files standard network features are supported for the following regions: * North Central US * South Central US
+* West US 3
## Considerations
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-over-tls.md
na Previously updated : 12/09/2021 Last updated : 01/04/2022 # Configure ADDS LDAP over TLS for Azure NetApp Files
If you uploaded an invalid certificate, and you have existing AD configurations,
To resolve the error condition, upload a valid root CA certificate to your NetApp account as required by the Windows Active Directory LDAP server for LDAP authentication.
+## Disable LDAP over TLS
+
+Disabling LDAP over TLS stops encryption of LDAP queries to Active Directory (the LDAP server). There are no other precautions or impacts on existing Azure NetApp Files volumes.
+
+1. Go to the NetApp account that is used for the volume and click **Active Directory connections**. Then click **Edit** to edit the existing AD connection.
+
+2. In the **Edit Active Directory** window that appears, deselect the **LDAP over TLS** checkbox and click **Save** to disable LDAP over TLS for the volume.
++ ## Next steps * [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-cli.md
The `publish` command doesn't recognize aliases that you've defined in a [bicepc
When your Bicep file uses modules that are published to a registry, the `restore` command gets copies of all the required modules from the registry. It stores those copies in a local cache. A Bicep file can only be built when the external files are available in the local cache. Typically, you don't need to run `restore` because it's called automatically by `build`.
+To restore external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#credentials-for-publishingrestoring-modules).
+ To use the restore command, you must have Bicep CLI version **0.4.1008 or later**. This command is currently only available when calling the Bicep CLI directly. It's not currently available through the Azure CLI command. To manually restore the external modules for a file, use:
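A minimal sketch, assuming `main.bicep` is the Bicep file that references registry modules:

```console
bicep restore main.bicep
```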
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 11/16/2021 Last updated : 01/03/2022 # Add module settings in the Bicep config file
For a template spec, use:
module stgModule 'ts/CoreSpecs:storage:v1' = { ```
-## Credentials for restoring modules
+## Credentials for publishing/restoring modules
-To [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, add `cloud` and `credentialPrecedence` elements to the config file.
+To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, add `cloud` and `credentialPrecedence` elements to the config file.
```json {
To [restore](bicep-cli.md#restore) external modules to the local cache, the acco
The available credentials are:
-* AzureCLI
-* AzurePowerShell
-* Environment
-* ManagedIdentity
-* VisualStudio
-* VisualStudioCode
+- AzureCLI
+- AzurePowerShell
+- Environment
+- ManagedIdentity
+- VisualStudio
+- VisualStudioCode
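For example, a `bicepconfig.json` that checks Azure CLI credentials first and then falls back to Azure PowerShell might look like the following sketch, built from the `cloud` and `credentialPrecedence` elements and the credential names listed above:

```json
{
  "cloud": {
    "credentialPrecedence": [
      "AzureCLI",
      "AzurePowerShell"
    ]
  }
}
```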
## Next steps
-* [Configure your Bicep environment](bicep-config.md)
-* [Add linter settings to Bicep config](bicep-config-linter.md)
-* Learn about [modules](modules.md)
+- [Configure your Bicep environment](bicep-config.md)
+- [Add linter settings to Bicep config](bicep-config-linter.md)
+- Learn about [modules](modules.md)
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 12/13/2021 Last updated : 01/03/2022 # Create private registry for Bicep modules
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r
You can use any of the available registry SKUs for the module registry. Registry [geo-replication](../../container-registry/container-registry-geo-replication.md) provides users with a local presence or as a hot-backup.
-1. Get the login server name. You need this name when linking to the registry from your Bicep files.
+1. Get the login server name. You need this name when linking to the registry from your Bicep files. The format of the login server name is: `<registry-name>.azurecr.io`.
- # [PowerShell](#tab/azure-powershell)
+ # [PowerShell](#tab/azure-powershell)
- To get the login server name, use [Get-AzContainerRegistry](/powershell/module/az.containerregistry/get-azcontainerregistry).
+ To get the login server name, use [Get-AzContainerRegistry](/powershell/module/az.containerregistry/get-azcontainerregistry).
- ```azurepowershell
- Get-AzContainerRegistry -ResourceGroupName "<resource-group-name>" -Name "<registry-name>" | Select-Object LoginServer
- ```
+ ```azurepowershell
+ Get-AzContainerRegistry -ResourceGroupName "<resource-group-name>" -Name "<registry-name>" | Select-Object LoginServer
+ ```
- # [Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/azure-cli)
- To get the login server name, use [az acr show](/cli/azure/acr#az_acr_show).
+ To get the login server name, use [az acr show](/cli/azure/acr#az_acr_show).
- ```azurecli
- az acr show --resource-group <resource-group-name> --name <registry-name> --query loginServer
- ```
+ ```azurecli
+ az acr show --resource-group <resource-group-name> --name <registry-name> --query loginServer
+ ```
-
+
- The format of the login server name is: `<registry-name>.azurecr.io`.
+1. To publish modules to a registry, you must have permission to **push** an image. To deploy a module from a registry, you must have permission to **pull** the image. For more information about the roles that grant adequate access, see [Azure Container Registry roles and permissions](../../container-registry/container-registry-roles.md).
-- To publish modules to a registry, you must have permission to **push** an image. To deploy a module from a registry, you must have permission to **pull** the image. For more information about the roles that grant adequate access, see [Azure Container Registry roles and permissions](../../container-registry/container-registry-roles.md).--- Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#credentials-for-restoring-modules).
+1. Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#credentials-for-publishingrestoring-modules).
> [!IMPORTANT] > The private container registry is only available to users with the required access. However, it's accessed through the public internet. For more security, you can require access through a private endpoint. See [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md).
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
+
+ Title: Publish modules to private module registry
+description: Publish Bicep modules to private module registry and use the modules.
Last updated : 01/04/2022++
+#Customer intent: As a developer new to Azure deployment, I want to learn how to publish Bicep modules to private module registry.
++
+# Quickstart: Publish Bicep modules to private module registry
+
+Learn how to publish Bicep modules to a private module registry, and how to call the modules from your Bicep files. A private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md).
+
+## Prerequisites
+
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+
+To work with module registries, you must have [Bicep CLI](./install.md#deployment-environment) version **0.4.1008** or later. To use with [Azure CLI](/azure/install-azure-cli), you must also have Azure CLI version **2.31.0** or later; to use with [Azure PowerShell](/powershell/azure/install-az-ps), you must also have Azure PowerShell version **7.0.0** or later.
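+You can check the installed versions from a terminal. The following is a quick sketch that assumes Bicep is managed through the Azure CLI:
+
+```azurecli
+az --version
+az bicep version
+```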
+
+A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). To create one, see [Quickstart: Create a container registry by using a Bicep file](../../container-registry/container-registry-get-started-bicep.md).
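+If you prefer the command line, you can also create a registry with a single command. This is a minimal sketch; the resource group name, registry name, and SKU are placeholders you should adjust:
+
+```azurecli
+az acr create --resource-group <resource-group-name> --name <registry-name> --sku Basic
+```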
+
+To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+
+## Create Bicep modules
+
+A module is a Bicep file that is deployed from another Bicep file. Any Bicep file can be used as a module. You can use the following Bicep file in this quickstart. It creates a storage account:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param storagePrefix string
+
+@allowed([
+ 'Standard_LRS'
+ 'Standard_GRS'
+ 'Standard_RAGRS'
+ 'Standard_ZRS'
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GZRS'
+ 'Standard_RAGZRS'
+])
+param storageSKU string = 'Standard_LRS'
+param location string
+
+var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'
+
+resource stg 'Microsoft.Storage/storageAccounts@2021-06-01' = {
+ name: uniqueStorageName
+ location: location
+ sku: {
+ name: storageSKU
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+
+output storageEndpoint object = stg.properties.primaryEndpoints
+```
+
+Save the Bicep file as **storage.bicep**.
+
+## Publish modules
+
+If you don't have an Azure container registry (ACR), see [Prerequisites](#prerequisites) to create one. The login server name of the ACR is needed. The format of the login server name is: `<registry-name>.azurecr.io`. To get the login server name:
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az acr show --resource-group <resource-group-name> --name <registry-name> --query loginServer
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzContainerRegistry -ResourceGroupName "<resource-group-name>" -Name "<registry-name>" | Select-Object LoginServer
+```
+++
+Use the following syntax to publish a Bicep file as a module to a private module registry.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+```
+++
+In the preceding sample, **./storage.bicep** is the Bicep file to be published. Update the file path if needed. The module path has the following syntax:
+
+```bicep
+br:<registry-name>.azurecr.io/<file-path>:<tag>
+```
+
+- **br** is the schema name for a Bicep registry.
+- **file path** is called `repository` in Azure Container Registry. The **file path** can contain segments that are separated by the `/` character. This file path is created if it doesn't exist in the registry.
+- **tag** is used for specifying a version for the module.
+
+To verify the published modules, you can list the ACR repository:
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az acr repository list --name <registry-name> --output table
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzContainerRegistryRepository -RegistryName <registry-name>
+```
+++
+## Call modules
+
+To call a module, create a new Bicep file in Visual Studio Code. In the new Bicep file, enter the following line.
+
+```bicep
+module stgModule 'br:<registry-name>.azurecr.io/bicep/modules/storage:v1'
+```
+
+Replace **&lt;registry-name>** with your ACR registry name. It takes a short moment to restore the module to your local cache. After the module is restored, the red curly line underneath the module path will go away. At the end of the line, add **=** and a space, and then select **required-properties** as shown in the following screenshot. The module structure is automatically populated.
++
+The following example is a completed Bicep file.
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param namePrefix string
+param location string = resourceGroup().location
+
+module stgModule 'br:ace1207.azurecr.io/bicep/modules/storage:v1' = {
+ name: 'stgStorage'
+ params: {
+ location: location
+ storagePrefix: namePrefix
+ }
+}
+```
+
+Save the Bicep file locally, and then use Azure CLI or Azure PowerShell to deploy the Bicep file:
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+resourceGroupName="{provide-a-resource-group-name}"
+templateFile="{provide-the-path-to-the-bicep-file}"
+
+az group create --name $resourceGroupName --location eastus
+
+az deployment group create --resource-group $resourceGroupName --template-file $templateFile
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$resourceGroupName = "{provide-a-resource-group-name}"
+$templateFile = "{provide-the-path-to-the-bicep-file}"
+
+New-AzResourceGroup -Name $resourceGroupName -Location eastus
+
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFile
+```
+++
+From the Azure portal, verify the storage account has been created successfully.
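+As an alternative to the portal, you can list the storage accounts in the resource group from the command line. This quick check reuses the resource group name from the deployment step:
+
+```azurecli
+az storage account list --resource-group $resourceGroupName --output table
+```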
+
+## Clean up resources
+
+When the Azure resources are no longer needed, use the Azure CLI or Azure PowerShell module to delete the quickstart resource group.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+resourceGroupName="{provide-the-resource-group-name}"
+
+az group delete --name $resourceGroupName
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$resourceGroupName = "{provide-the-resource-group-name}"
+
+Remove-AzResourceGroup -Name $resourceGroupName
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Bicep in Microsoft Learn](learn-bicep.md)
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
The following diagram illustrates a typical configuration of a geo-redundant clo
> [!NOTE] > See [Add managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md) for a detailed step-by-step tutorial adding a SQL Managed Instance to use failover group.
+> [!IMPORTANT]
+> If you deploy auto-failover groups in a cross-region hub-and-spoke network topology, replication traffic should go directly between the two managed instance subnets rather than being directed through the hub networks.
+ If your application uses SQL Managed Instance as the data tier, follow these general guidelines when designing for business continuity: ### <a name="creating-the-secondary-instance"></a> Create the geo-secondary managed instance
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
az sql db str-policy set \
--diffbackup-hours 24 ```
-#### [SQL Database](#tab/managed-instance)
+#### [SQL Managed Instance](#tab/managed-instance)
Use the following example to change the PITR backup retention of a **single active** database in a SQL Managed Instance.
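A minimal sketch using the Azure CLI follows. The `az sql midb short-term-retention-policy` command group and its parameters are assumptions about your installed CLI version, so verify them with `az sql midb short-term-retention-policy set --help` before running:

```azurecli
az sql midb short-term-retention-policy set \
    --resource-group <resource-group-name> \
    --managed-instance <managed-instance-name> \
    --name <database-name> \
    --retention-days 7
```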
azure-sql Automatic Tuning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automatic-tuning-overview.md
Automatic tuning for SQL Managed Instance only supports **FORCE LAST GOOD PLAN**
## Next steps -- To learn about built-in intelligence used in automatic tuning, see [Artificial Intelligence tunes Azure SQL Database](https://azure.microsoft.com/blog/artificial-intelligence-tunes-azure-sql-databases/).-- To learn how automatic tuning works under the hood, see [Automatically indexing millions of databases in Microsoft Azure SQL Database](https://www.microsoft.com/research/uploads/prod/2019/02/autoindexing_azuredb.pdf).
+- Read the blog post [Artificial Intelligence tunes Azure SQL Database](https://azure.microsoft.com/blog/artificial-intelligence-tunes-azure-sql-databases/).
+- Learn how automatic tuning works under the hood in [Automatically indexing millions of databases in Microsoft Azure SQL Database](https://www.microsoft.com/research/uploads/prod/2019/02/autoindexing_azuredb.pdf).
+- Learn how automatic tuning can proactively help you [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md).
azure-sql Az Cli Script Samples Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/az-cli-script-samples-content-guide.md
Previously updated : 09/17/2021 Last updated : 12/22/2021 keywords: sql database, managed instance, azure cli samples, azure cli examples, azure cli code samples, azure cli script examples # Azure CLI samples for Azure SQL Database and SQL Managed Instance
-
+ [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)] You can configure Azure SQL Database and SQL Managed Instance by using the <a href="/cli/azure">Azure CLI</a>.
You can configure Azure SQL Database and SQL Managed Instance by using the <a hr
- This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-# [Azure SQL Database](#tab/single-database)
+## [Azure SQL Database](#tab/single-database)
-The following table includes links to Azure CLI script examples to manage single and pooled databases in Azure SQL Database.
+The following table includes links to Azure CLI script examples to manage single and pooled databases in Azure SQL Database.
|Area|Description| |||
-|**Create databases in Azure SQL Database**||
-| [Create a single database and configure a firewall rule](scripts/create-and-configure-database-cli.md) | Creates an SQL Database and configures a server-level firewall rule. |
-| [Create elastic pools and move pooled databases](scripts/move-database-between-elastic-pools-cli.md) | Creates elastic pools, moves pooled databases, and changes compute sizes. |
-|**Scale databases in Azure SQL Database**||
-| [Scale a single database](scripts/monitor-and-scale-database-cli.md) | Scales a database in SQL Database to a different compute size after querying the size information for the database. |
-| [Scale an elastic pool](scripts/scale-pool-cli.md) | Scales a SQL elastic pool to a different compute size. |
-|**Configure geo-replication and failover**||
-| [Add a single database to a failover group](scripts/add-database-to-failover-group-cli.md)| Creates a database and a failover group, adds the database to the failover group, then tests failover to the secondary server. |
-| [Configure a failover group for an elastic pool](../../sql-database/scripts/sql-database-add-elastic-pool-to-failover-group-cli.md) | Creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, then tests failover to the secondary server. |
-| [Configure and fail over a single database by using active geo-replication](../../sql-database/scripts/sql-database-setup-geodr-and-failover-database-cli.md)| Configures active geo-replication for a database in Azure SQL Database and fails it over to the secondary replica. |
-| [Configure and fail over a pooled database by using active geo-replication](../../sql-database/scripts/sql-database-setup-geodr-and-failover-pool-cli.md)| Configures active geo-replication for a database in an elastic pool, then fails it over to the secondary replica. |
+|**Create databases**||
+| [Create a single database](scripts/create-and-configure-database-cli.md) | Creates an SQL Database and configures a server-level firewall rule. |
+| [Create pooled databases](scripts/move-database-between-elastic-pools-cli.md) | Creates elastic pools, moves pooled databases, and changes compute sizes. |
+|**Scale databases**||
+| [Scale a single database](scripts/monitor-and-scale-database-cli.md) | Scales a single database. |
+| [Scale pooled database](scripts/scale-pool-cli.md) | Scales a SQL elastic pool to a different compute size. |
+|**Configure geo-replication**||
+| [Single database](scripts/setup-geodr-failover-database-cli.md)| Configures active geo-replication for a database in Azure SQL Database and fails it over to the secondary replica. |
+| [Pooled database](scripts/setup-geodr-failover-pool-cli.md)| Configures active geo-replication for a database in an elastic pool, then fails it over to the secondary replica. |
+|**Configure failover group**||
+| [Configure failover group](scripts/setup-geodr-failover-group-cli.md) | Configures a failover group for a group of databases and fails over databases to the secondary server. |
+| [Single database](scripts/add-database-to-failover-group-cli.md)| Creates a database and a failover group, adds the database to the failover group, then tests failover to the secondary server. |
+| [Pooled database](scripts/add-elastic-pool-to-failover-group-cli.md) | Creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, then tests failover to the secondary server. |
| **Auditing and threat detection** |
-| [Configure auditing and threat-detection](../../sql-database/scripts/sql-database-auditing-and-threat-detection-cli.md)| Configures auditing and threat detection policies for a database in Azure SQL Database. |
+| [Configure auditing and threat-detection](scripts/auditing-threat-detection-cli.md)| Configures auditing and threat detection policies for a database in Azure SQL Database. |
| **Back up, restore, copy, and import a database**||
-| [Back up a database](../../sql-database/scripts/sql-database-backup-database-cli.md)| Backs up a database in SQL Database to an Azure storage backup. |
-| [Restore a database](../../sql-database/scripts/sql-database-restore-database-cli.md)| Restores a database in SQL Database from a geo-redundant backup and restores a deleted database to the latest backup. |
-| [Copy a database to a new server](../../sql-database/scripts/sql-database-copy-database-to-new-server-cli.md) | Creates a copy of an existing database in SQL Database in a new server. |
-| [Import a database from a BACPAC file](../../sql-database/scripts/sql-database-import-from-bacpac-cli.md)| Imports a database to SQL Database from a BACPAC file. |
+| [Back up a database](scripts/backup-database-cli.md)| Backs up a database in SQL Database to an Azure storage backup. |
+| [Restore a database](scripts/restore-database-cli.md)| Restores a database in SQL Database to a specific point in time. |
+| [Copy a database to a new server](scripts/copy-database-to-new-server-cli.md) | Creates a copy of an existing database in SQL Database in a new server. |
+| [Import a database from a BACPAC file](scripts/import-from-bacpac-cli.md)| Imports a database to SQL Database from a BACPAC file. |
|||
-Learn more about the [single-database Azure CLI API](single-database-manage.md#the-azure-cli).
+Learn more about the [single-database Azure CLI API](single-database-manage.md#azure-cli).
-# [Azure SQL Managed Instance](#tab/managed-instance)
+## [Azure SQL Managed Instance](#tab/managed-instance)
The following table includes links to Azure CLI script examples for Azure SQL Managed Instance. |Area|Description| |||
-| **Create a SQL Managed Instance**||
-| [Create a SQL Managed Instance](../../sql-database/scripts/sql-database-create-configure-managed-instance-cli.md)| Creates a SQL Managed Instance. |
-| **Configure Transparent Data Encryption (TDE)**||
-| [Manage Transparent Data Encryption in a SQL Managed Instance by using Azure Key Vault](../../sql-database/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md)| Configures Transparent Data Encryption (TDE) in SQL Managed Instance by using Azure Key Vault with various key scenarios. |
-|**Configure a failover group**||
-| [Configure a failover group for SQL Managed Instance](../../sql-database/scripts/sql-database-add-managed-instance-to-failover-group-cli.md) | Creates two instances of SQL Managed Instance, adds them to a failover group, and then tests failover from the primary SQL Managed Instance to the secondary SQL Managed Instance. |
+| [Create SQL Managed Instance](../managed-instance/scripts/create-configure-managed-instance-cli.md)| Creates a SQL Managed Instance. |
+| [Configure Transparent Data Encryption (TDE)](../managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md)| Configures Transparent Data Encryption (TDE) in SQL Managed Instance by using Azure Key Vault with various key scenarios. |
+| [Restore geo-backup](../managed-instance/scripts/restore-geo-backup-cli.md) | Performs a geo-restore between two instances of SQL Managed Instance to a specific point in time. |
||| For additional SQL Managed Instance examples, see the [create](/archive/blogs/sqlserverstorageengine/create-azure-sql-managed-instance-using-azure-cli), [update](/archive/blogs/sqlserverstorageengine/modify-azure-sql-database-managed-instance-using-azure-cli), [move a database](/archive/blogs/sqlserverstorageengine/cross-instance-point-in-time-restore-in-azure-sql-database-managed-instance), and [working with](https://medium.com/azure-sqldb-managed-instance/working-with-sql-managed-instance-using-azure-cli-611795fe0b44) scripts. Learn more about the [SQL Managed Instance Azure CLI API](../managed-instance/api-references-create-manage-instance.md#azure-cli-create-and-configure-managed-instances).--
azure-sql Elastic Pool Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-manage.md
To create and manage SQL Database elastic pools and pooled databases with Azure
## Azure CLI
-To create and manage SQL Database elastic pools with the [Azure CLI](/cli/azure), use the following [Azure CLI SQL Database](/cli/azure/sql/db) commands. Use the [Cloud Shell](../../cloud-shell/overview.md) to run the CLI in your browser, or [install](/cli/azure/install-azure-cli) it on macOS, Linux, or Windows.
+To create and manage SQL Database elastic pools with [Azure CLI](/cli/azure), use the following [Azure CLI SQL Database](/cli/azure/sql/db) commands. Use the [Cloud Shell](../../cloud-shell/overview.md) to run Azure CLI in your browser, or [install](/cli/azure/install-azure-cli) it on macOS, Linux, or Windows.
> [!TIP] > For Azure CLI example scripts, see [Use CLI to move a database in SQL Database in a SQL elastic pool](scripts/move-database-between-elastic-pools-cli.md) and [Use Azure CLI to scale a SQL elastic pool in Azure SQL Database](scripts/scale-pool-cli.md).
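For orientation, a minimal sketch of the two most common operations, using placeholder names rather than values from the linked samples:

```azurecli
# Create an elastic pool on an existing server, then move a database into it.
az sql elastic-pool create \
    --resource-group MyResourceGroup \
    --server mysqlserver \
    --name myelasticpool \
    --edition GeneralPurpose --family Gen5 --capacity 2

az sql db update \
    --resource-group MyResourceGroup \
    --server mysqlserver \
    --name mydatabase \
    --elastic-pool myelasticpool
```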
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Previously updated : 08/27/2019 Last updated : 12/10/2021 # Tutorial: Add an Azure SQL Database elastic pool to a failover group+ [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)] Configure a failover group for an Azure SQL Database elastic pool and test failover using the Azure portal. In this tutorial, you will learn how to:
Configure a failover group for an Azure SQL Database elastic pool and test failo
## Prerequisites
+# [Azure portal](#tab/azure-portal)
+ To complete this tutorial, make sure you have: - An Azure subscription. [Create a free account](https://azure.microsoft.com/free/) if you don't already have one.
+# [PowerShell](#tab/azure-powershell)
+
+To complete the tutorial, make sure you have the following items:
+
+- An Azure subscription. [Create a free account](https://azure.microsoft.com/free/) if you don't already have one.
+- [Azure PowerShell](/powershell/azure/)
+
+# [Azure CLI](#tab/azure-cli)
+
+To complete the tutorial, make sure you have the following items:
+
+- An Azure subscription. [Create a free account](https://azure.microsoft.com/free/) if you don't already have one.
+- The latest version of [the Azure CLI](/cli/azure/install-azure-cli).
+++ ## 1 - Create a single database [!INCLUDE [sql-database-create-single-database](../includes/sql-database-create-single-database.md)]
To complete this tutorial, make sure you have:
In this step, you will create an elastic pool and add your database to it.
-# [Portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
Create your elastic pool using the Azure portal.
This portion of the tutorial uses the following PowerShell cmdlets:
| [New-AzSqlElasticPool](/powershell/module/az.sql/new-azsqlelasticpool) | Creates an elastic database pool for an Azure SQL Database.| | [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) | Sets properties for a database, or moves an existing database into an elastic pool. |
+# [Azure CLI](#tab/azure-cli)
+
+In this step, you create your elastic pool and add your database to the elastic pool using the Azure CLI.
+
+### Set additional parameter values to create elastic pool
+
+Set these additional parameter values for use in creating an elastic pool.
++
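The parameter-setting script is pulled in from an include that isn't shown here. A sketch of the kind of values it defines (variable names and sizes are illustrative, and `$randomIdentifier` is assumed to be set earlier in the tutorial):

```azurecli
# Illustrative elastic pool parameters; adjust edition, family, and capacity as needed.
pool="pool-$randomIdentifier"
edition="GeneralPurpose"
family="Gen5"
capacity=2
```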
+### Create elastic pool on primary server
+
+Use this script to create an elastic pool with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az_sql_elastic_pool_create) command.
++
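The included script isn't shown in this digest; a minimal sketch of the command it wraps, assuming the resource group, server, and pool variables defined earlier:

```azurecli
echo "Creating the elastic pool on the primary server..."
az sql elastic-pool create \
    --resource-group $resourceGroup \
    --server $server \
    --name $pool \
    --edition $edition --family $family --capacity $capacity
```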
+### Add database to elastic pool
+
+Use this script to add a database to an elastic pool with the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command.
++
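The included script isn't shown here; a minimal sketch, assuming `$database` was created earlier in the tutorial:

```azurecli
echo "Adding the database to the elastic pool..."
az sql db update \
    --resource-group $resourceGroup \
    --server $server \
    --name $database \
    --elastic-pool $pool
```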
+This portion of the tutorial uses the following Azure CLI cmdlets:
+
+| Command | Notes |
+|||
+| [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az_sql_elastic_pool_create) | Creates an elastic pool. |
+| [az sql db update](/cli/azure/sql/db#az_sql_db_update) | Updates a database|
+ ## 3 - Create the failover group In this step, you will create a [failover group](auto-failover-group-overview.md) between an existing server and a new server in another region. Then add the elastic pool to the failover group.
-# [Portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
Create your failover group using the Azure portal.
Create your failover group using the Azure portal.
![Create a secondary server for the failover group](./media/failover-group-add-elastic-pool-tutorial/create-secondary-failover-server.png)
-1. Select **Databases within the group** then select the elastic pool you created in section 2. A warning should appear, prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
+1. Select **Databases within the group** then select the elastic pool you created in section 2. A warning should appear, prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
![Add elastic pool to failover group](./media/failover-group-add-elastic-pool-tutorial/add-elastic-pool-to-failover-group.png)
This portion of the tutorial uses the following PowerShell cmdlets:
| [Add-AzSqlDatabaseToFailoverGroup](/powershell/module/az.sql/add-azsqldatabasetofailovergroup) | Adds one or more Azure SQL databases to a failover group. | | [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Gets or lists Azure SQL Database failover groups. |
+# [Azure CLI](#tab/azure-cli)
+
+In this step, you create your secondary server, failover group, and elastic pool, and add a database to the failover group using the Azure CLI.
+
+### Set additional parameter values to create failover group
+
+Set these additional parameter values for use in creating the failover group, in addition to the values defined in the preceding script that created the primary resource group and server.
+
+Change the failover location as appropriate for your environment.
++
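A sketch of the kind of values the included script defines (names are illustrative; pick a failover location different from the primary region):

```azurecli
# Illustrative failover group parameters.
failoverLocation="westus"
failoverServer="failover-server-$randomIdentifier"
failoverGroup="failover-group-$randomIdentifier"
```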
+### Create secondary server
+
+Use this script to create a secondary server with the [az sql server create](/cli/azure/sql/server#az_sql_server_create) command.
+> [!NOTE]
+> The server login and firewall settings must match that of your primary server.
++
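The included script isn't shown here; a minimal sketch, reusing the login and password of the primary server as the note above requires:

```azurecli
echo "Creating the secondary server in the failover region..."
az sql server create \
    --name $failoverServer \
    --resource-group $resourceGroup \
    --location $failoverLocation \
    --admin-user $login \
    --admin-password $password
```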
+### Create elastic pool on secondary server
+
+Use this script to create an elastic pool on the secondary server with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az_sql_elastic_pool_create) command.
++
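A minimal sketch of the included script, creating a pool on the secondary server that matches the primary pool:

```azurecli
echo "Creating the elastic pool on the secondary server..."
az sql elastic-pool create \
    --resource-group $resourceGroup \
    --server $failoverServer \
    --name $pool \
    --edition $edition --family $family --capacity $capacity
```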
+### Create failover group
+
+Use this script to create a failover group with the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command.
++
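A minimal sketch of the included script (the database is added to the group in the next step):

```azurecli
echo "Creating the failover group between the two servers..."
az sql failover-group create \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $server \
    --partner-server $failoverServer \
    --failover-policy Automatic
```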
+### Add database to failover group
+
+Use this script to add a database to the failover group with the [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) command.
++
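A minimal sketch of the included script, assuming the variables defined earlier:

```azurecli
echo "Adding the pooled database to the failover group..."
az sql failover-group update \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $server \
    --add-db $database
```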
+### Azure CLI failover group creation reference
+
+This portion of the tutorial uses the following Azure CLI cmdlets:
+
+| Command | Notes |
+|||
+| [az sql server create](/cli/azure/sql/server#az_sql_server_create) | Creates a server that hosts databases and elastic pools. |
+| [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az_sql_elastic_pool_create) | Creates an elastic pool.|
+| [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) | Creates a failover group. |
+| [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) | Updates a failover group.|
+ ## 4 - Test failover In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure portal.
-# [Portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
Test failover of your failover group using the Azure portal.
This portion of the tutorial uses the following PowerShell cmdlets:
| [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Gets or lists Azure SQL Database failover groups. | | [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup)| Executes a failover of an Azure SQL Database failover group. |
+# [Azure CLI](#tab/azure-cli)
+
+Test failover using the Azure CLI.
+
+### Verify the roles of each server
+
+Use this script to confirm the roles of each server with the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command.
++
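A minimal sketch of the included script; `replicationRole` reports whether each server is currently Primary or Secondary:

```azurecli
echo "Verifying the role of each server..."
az sql failover-group show \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $server \
    --query replicationRole

az sql failover-group show \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $failoverServer \
    --query replicationRole
```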
+### Fail over to the secondary server
+
+Use this script to fail over to the secondary server and verify a successful failover with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) and [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) commands.
++
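A minimal sketch of the included script, assuming the variables defined earlier:

```azurecli
echo "Failing the group over to the secondary server..."
az sql failover-group set-primary \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $failoverServer

echo "Confirming the new primary..."
az sql failover-group show \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $failoverServer \
    --query replicationRole
```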
+### Revert failover group back to the primary server
+
+Use this script to fail back to the primary server with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command.
++
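A minimal sketch of the included script:

```azurecli
echo "Failing the group back to the primary server..."
az sql failover-group set-primary \
    --name $failoverGroup \
    --resource-group $resourceGroup \
    --server $server
```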
+### Azure CLI failover group management reference
+
+This portion of the tutorial uses the following Azure CLI cmdlets:
+
+| Command | Notes |
+|||
+| [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) | Gets the failover groups in a server. |
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) | Set the primary of the failover group by failing over all databases from the current primary server. |
+ ## Clean up resources Clean up resources by deleting the resource group.
-# [Portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
1. Navigate to your resource group in the [Azure portal](https://portal.azure.com). 1. Select **Delete resource group** to delete all the resources in the group, as well as the resource group itself.
This portion of the tutorial uses the following PowerShell cmdlet:
||| | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group |
+# [Azure CLI](#tab/azure-cli)
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have additional needs for these resources. Some of these resources can take a while to create, as well as to delete.
+
+ ```azurecli
+ echo "Cleaning up resources by removing the resource group..."
+ az group delete --name $resourceGroup -y
+ ```
+
+This portion of the tutorial uses the following Azure CLI cmdlets:
+
+| Command | Notes |
+|||
+| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. |
+ > [!IMPORTANT]
This script uses the following commands. Each command in the table links to comm
| [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup)| Executes a failover of an Azure SQL Database failover group. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group |
-# [Portal](#tab/azure-portal)
+# [Azure CLI](#tab/azure-cli)
++
+# [Azure portal](#tab/azure-portal)
There are no scripts available for the Azure portal.
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Title: "Tutorial: Add a database to a failover group"
-description: Add a database in Azure SQL Database to an autofailover group using the Azure portal, PowerShell, or the Azure CLI.
+description: Add a database in Azure SQL Database to an autofailover group using the Azure portal, PowerShell, or the Azure CLI.
Previously updated : 06/19/2019 Last updated : 12/10/2021 # Tutorial: Add an Azure SQL Database to an autofailover group+ [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)] A [failover group](auto-failover-group-overview.md) is a declarative abstraction layer that allows you to group multiple geo-replicated databases. Learn to configure a failover group for an Azure SQL Database and test failover using either the Azure portal, PowerShell, or the Azure CLI. In this tutorial, you'll learn how to:
A [failover group](auto-failover-group-overview.md) is a declarative abstraction
## Prerequisites
-# [The portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
To complete this tutorial, make sure you have:
To complete the tutorial, make sure you have the following items:
- An Azure subscription. [Create a free account](https://azure.microsoft.com/free/) if you don't already have one. - [Azure PowerShell](/powershell/azure/)
-# [The Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli)
To complete the tutorial, make sure you have the following items:
To complete the tutorial, make sure you have the following items:
In this step, you will create a [failover group](auto-failover-group-overview.md) between an existing server and a new server in another region. Then add the sample database to the failover group.
-# [The portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
Create your failover group and add your database to it using the Azure portal.
This portion of the tutorial uses the following PowerShell cmdlets:
| [Get-AzSqlDatabase](/powershell/module/az.sql/get-azsqldatabase) | Gets one or more databases in Azure SQL Database. | | [Add-AzSqlDatabaseToFailoverGroup](/powershell/module/az.sql/add-azsqldatabasetofailovergroup) | Adds one or more databases to a failover group in Azure SQL Database. |
-# [The Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli)
-Create your failover group and add your database to it using the Azure CLI.
+In this step, you create your failover group and add your database to it using the Azure CLI.
- > [!NOTE]
- > The server login and firewall settings must match that of your primary server.
+### Set additional parameter values
- ```azurecli-interactive
- #!/bin/bash
- # set variables
- $failoverLocation = "West US"
- $failoverServer = "failoverServer-$randomIdentifier"
- $failoverGroup = "failoverGroup-$randomIdentifier"
+Set these additional parameter values for use in creating the failover group, in addition to the values defined in the preceding script that created the primary resource group and server.
- echo "Creating a secondary server in the DR region..."
- az sql server create --name $failoverServer --resource-group $resourceGroup --location $failoverLocation --admin-user $login --admin-password $password
+Change the failover location as appropriate for your environment.
- echo "Creating a failover group between the two servers..."
- az sql failover-group create --name $failoverGroup --partner-server $failoverServer --resource-group $resourceGroup --server $server --add-db $database --failover-policy Automatic
- ```
+
+### Create the secondary server
+
+Use this script to create a secondary server with the [az sql server create](/cli/azure/sql/server#az_sql_server_create) command.
+> [!NOTE]
+> The server login and firewall settings must match that of your primary server.
++
+### Create the failover group
+
+Use this script to create a failover group with the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command.
++
+### Azure CLI failover group creation reference
This portion of the tutorial uses the following Azure CLI cmdlets: | Command | Notes | ||| | [az sql server create](/cli/azure/sql/server#az_sql_server_create) | Creates a server that hosts databases and elastic pools. |
-| [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) | Creates a server's firewall rules. |
| [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) | Creates a failover group. |
+| [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) | Updates a failover group.|
## 3 - Test failover
-In this step, you'll fail your failover group over to the secondary server, and then fail back using the Azure portal.
+In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure portal.
-# [The portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
Test failover using the Azure portal.
This portion of the tutorial uses the following PowerShell cmdlets:
| [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Gets or lists Azure SQL Database failover groups. | | [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup)| Executes a failover of an Azure SQL Database failover group. |
-# [The Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli)
Test failover using the Azure CLI.
-Verify which server is the secondary:
+### Verify the roles of each server
- ```azurecli-interactive
- echo "Verifying which server is in the secondary role..."
- az sql failover-group list --server $server --resource-group $resourceGroup
- ```
+Use this script to confirm the roles of each server with the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command.
-Fail over to the secondary server:
- ```azurecli-interactive
- echo "Failing over group to the secondary server..."
- az sql failover-group set-primary --name $failoverGroup --resource-group $resourceGroup --server $failoverServer
- echo "Successfully failed failover group over to" $failoverServer
- ```
+### Fail over to the secondary server
-Revert failover group back to the primary server:
+Use this script to fail over to the secondary server and verify a successful failover with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) and [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) commands.
- ```azurecli-interactive
- echo "Failing over group back to the primary server..."
- az sql failover-group set-primary --name $failoverGroup --resource-group $resourceGroup --server $server
- echo "Successfully failed failover group back to" $server
- ```
+
+### Revert failover group back to the primary server
+
+Use this script to fail back to the primary server with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command.
++
+### Azure CLI failover group management reference
This portion of the tutorial uses the following Azure CLI cmdlets: | Command | Notes | |||
-| [az sql failover-group list](/cli/azure/sql/failover-group#az_sql_failover_group_list) | Lists the failover groups in a server. |
+| [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) | Gets the failover groups in a server. |
| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) | Set the primary of the failover group by failing over all databases from the current primary server. |
This portion of the tutorial uses the following Azure CLI cmdlets:
Clean up resources by deleting the resource group.
-# [The portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
Delete the resource group using the Azure portal.
This portion of the tutorial uses the following PowerShell cmdlets:
||| | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group |
-# [The Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli)
-Delete the resource group by using the Azure CLI.
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have additional needs for these resources. Some of these resources can take a while to create, as well as to delete.
- ```azurecli-interactive
+ ```azurecli
echo "Cleaning up resources by removing the resource group..."
- az group delete --name $resourceGroup
- echo "Successfully removed resource group" $resourceGroup
+ az group delete --name $resourceGroup -y
+ ``` This portion of the tutorial uses the following Azure CLI cmdlets:
This script uses the following commands. Each command in the table links to comm
# [Azure CLI](#tab/azure-cli)
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh "Add database to a failover group")]
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
| [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) | Creates the server-level IP firewall rules in Azure SQL Database. | | [az sql db create](/cli/azure/sql/db) | Creates a database in Azure SQL Database. | | [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) | Creates a failover group in Azure SQL Database. |
-| [az sql failover-group list](/cli/azure/sql/failover-group#az_sql_failover_group_list) | Lists the failover groups in a server in Azure SQL Database. |
+| [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) | Lists the failover groups in a server in Azure SQL Database. |
| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) | Set the primary of the failover group by failing over all databases from the current primary server. |
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. |
-# [The portal](#tab/azure-portal)
+# [Azure portal](#tab/azure-portal)
There are no scripts available for the Azure portal.
-You can find other Azure SQL Database scripts here: [Azure PowerShell](powershell-script-content-guide.md) and [Azure CLI](az-cli-script-samples-content-guide.md).
+For additional Azure SQL Database scripts, see: [Azure PowerShell](powershell-script-content-guide.md) and [Azure CLI](az-cli-script-samples-content-guide.md).
## Next steps
In this tutorial, you added a database in Azure SQL Database to a failover group
Advance to the next tutorial on how to add your elastic pool to a failover group. > [!div class="nextstepaction"]
-> [Tutorial: Add an Azure SQL Database elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
+> [Tutorial: Add an Azure SQL Database elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
Last updated 07/01/2019
# Azure SQL Database traffic migration to newer Gateways [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-As Azure infrastructure improves, Microsoft will periodically refresh hardware to ensure we provide the best possible customer experience. In the coming months, we plan to add gateways built on newer hardware generations, migrate traffic to them, and eventually decommission gateways built on older hardware in some regions.
+Microsoft periodically refreshes hardware to optimize the customer experience. During these refreshes, Azure adds gateways built on newer hardware generations, migrates traffic to them, and eventually decommissions gateways built on older hardware in some regions.
-Customers will be notified via service health notifications well in advance of any change to gateways available in each region. Customers can [use the Azure portal to set up activity log alerts](../../service-health/alerts-activity-log-service-notifications-portal.md).
-The most up-to-date information will be maintained in the [Azure SQL Database gateway IP addresses](connectivity-architecture.md#gateway-ip-addresses) table.
+To avoid service disruptions during refreshes, allow communication with the SQL Gateway IP subnet ranges for your region. Review [SQL Gateway IP subnet ranges](connectivity-architecture.md#gateway-ip-addresses) and include the ranges for your region.
++
+Customers can [use the Azure portal to set up activity log alerts](../../service-health/alerts-activity-log-service-notifications-portal.md).
+ ## Status updates
azure-sql High Cpu Diagnose Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-cpu-diagnose-troubleshoot.md
+
+ Title: Diagnose and troubleshoot high CPU
+
+description: Learn to diagnose and troubleshoot high CPU problems in Azure SQL Database.
++++
+ms.devlang:
++++ Last updated : 12/15/2021+
+# Diagnose and troubleshoot high CPU on Azure SQL Database
++
+[Azure SQL Database](sql-database-paas-overview.md) provides built-in tools to identify the causes of high CPU usage and to optimize workload performance. You can use these tools to troubleshoot high CPU usage while it's occurring, or reactively after the incident has completed. You can also enable [automatic tuning](automatic-tuning-overview.md) to proactively reduce CPU usage over time for your database. This article teaches you to diagnose and troubleshoot high CPU with built-in tools in Azure SQL Database and explains [when to add CPU resources](#when-to-add-cpu-resources).
+
+## Understand vCore count
+
+It's helpful to understand the number of virtual cores (vCores) available to your database when diagnosing a high CPU incident. A vCore is equivalent to a logical CPU. The number of vCores helps you understand the CPU resources available to your database.
+
+### Identify vCore count in the Azure portal
+
+You can quickly identify the vCore count for a database in the Azure portal if you're using a [vCore-based service tier](service-tiers-vcore.md) with the provisioned compute tier. In this case, the **pricing tier** listed for the database on its **Overview** page will contain the vCore count. For example, a database's pricing tier might be 'General Purpose: Gen5, 16 vCores'.
+
+For databases in the [serverless](serverless-tier-overview.md) compute tier, the vCore count is always equivalent to the max vCore setting for the database. The vCore count appears in the **pricing tier** listed for the database on its **Overview** page. For example, a database's pricing tier might be 'General Purpose: Serverless, Gen5, 16 vCores'.
+
+If you're using a database under the [DTU-based purchase model](service-tiers-dtu.md), you will need to use Transact-SQL to query the database's vCore count.
+
+### Identify vCore count with Transact-SQL
+
+You can identify the current vCore count for any database with Transact-SQL. You can run Transact-SQL against Azure SQL Database with [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or [the Azure portal's query editor (preview)](connect-query-portal.md).
+
+Connect to your database and run the following query:
+
+```sql
+SELECT
+ COUNT(*) as vCores
+FROM sys.dm_os_schedulers
+WHERE status = N'VISIBLE ONLINE';
+GO
+```
+
+> [!NOTE]
+> For databases using Gen4 hardware, the number of visible online schedulers in `sys.dm_os_schedulers` may be double the number of vCores specified at database creation and shown in the Azure portal.
+
+## Identify the causes of high CPU
+You can measure and analyze CPU utilization using the Azure portal, Query Store interactive tools in SSMS, and Transact-SQL queries in SSMS and Azure Data Studio.
+
+The Azure portal and Query Store show execution statistics, such as CPU metrics, for completed queries. If you are experiencing a current high CPU incident that may be caused by one or more ongoing long-running queries, [identify currently running queries with Transact-SQL](#identify-currently-running-queries-with-transact-sql).
+
+Common causes of new and unusual high CPU utilization are:
+
+* New queries in the workload that use a large amount of CPU.
+* An increase in the frequency of regularly running queries.
+* Query plan regression, including regression due to [parameter sensitive plan (PSP) problems](../identify-query-performance-issues.md), resulting in one or more queries consuming more CPU.
+* A significant increase in compilation or recompilation of query plans.
+* Databases where queries use [excessive parallelism](configure-max-degree-of-parallelism.md#excessive-parallelism).
+
+To understand what is causing your high CPU incident, identify when high CPU utilization is occurring against your database and the top queries using CPU at that time.
+
+Examine:
+
+- Are new queries using significant CPU appearing in the workload, or are you seeing an increase in frequency of regularly running queries? Use any of the following methods to investigate. Look for queries with limited history (new queries), and at the frequency of execution for queries with longer history.
+ - [Review CPU metrics and related top queries in the Azure portal](#review-cpu-usage-metrics-and-related-top-queries-in-the-azure-portal)
+ - [Query the top recent 15 queries by CPU usage](#query-the-top-recent-15-queries-by-cpu-usage) with Transact-SQL.
+ - [Use interactive Query Store tools in SSMS to identify top queries by CPU time](#use-interactive-query-store-tools-to-identify-top-queries-by-cpu-time)
+- Are some queries in the workload using more CPU per execution than they did in the past? If so, has the query execution plan changed? These queries may [have parameter sensitive plan (PSP) problems](../identify-query-performance-issues.md). Use either of the following techniques to investigate. Look for queries with multiple query execution plans with significant variation in CPU usage:
+ - [Query the top recent 15 queries by CPU usage](#query-the-top-recent-15-queries-by-cpu-usage) with Transact-SQL.
+ - [Use interactive Query Store tools in SSMS to identify top queries by CPU time](#use-interactive-query-store-tools-to-identify-top-queries-by-cpu-time)
+- Is there evidence of a large amount of compilation or recompilation occurring? Query the [most frequently compiled queries by query hash](#query-the-most-frequently-compiled-queries-by-query-hash) and review how frequently they compile.
+- Are queries using excessive parallelism? Query your [MAXDOP database scoped configuration](configure-max-degree-of-parallelism.md#maxdop-database-scoped-configuration-1) and review your [vCore count](#understand-vcore-count). Excessive parallelism often occurs in databases where MAXDOP is set to 0 with a core count higher than eight.
+
+> [!Note]
+> Azure SQL Database requires compute resources to implement core service features such as high availability and disaster recovery, database backup and restore, monitoring, Query Store, automatic tuning, etc. Use of these compute resources may be particularly noticeable on databases with low vCore counts or databases in dense [elastic pools](elastic-pool-overview.md). Learn more in [Resource management in Azure SQL Database](resource-limits-logical-server.md#resource-consumption-by-user-workloads-and-internal-processes).
++
+### Review CPU usage metrics and related top queries in the Azure portal
+
+Use the Azure portal to track various CPU metrics, including the percentage of available CPU used by your database over time. The Azure portal combines CPU metrics with information from your database's Query Store, which allows you to identify which queries consumed CPU in your database at a given time.
+
+Follow these steps to find CPU percentage metrics.
+
+1. Navigate to the database in the Azure portal.
+1. Under **Intelligent Performance** in the left menu, select **Query Performance Insight**.
+
+The default view of Query Performance Insight shows 24 hours of data. CPU usage is shown as a percentage of total available CPU used for the database.
+
+The top five queries running in that period are displayed in vertical bars above the CPU usage graph. Select a band of time on the chart or use the **Customize** menu to explore specific time periods. You may also increase the number of queries shown.
++
+Select each query ID exhibiting high CPU to open details for the query. Details include query text along with performance history for the query. Examine if CPU has increased for the query recently.
+
+Take note of the query ID to further investigate the query plan using Query Store in the following section.
+### Review query plans for top queries identified in the Azure portal
+
+Follow these steps to use a query ID in SSMS's interactive Query Store tools to examine the query's execution plan over time.
+
+1. Open SSMS.
+1. Connect to your Azure SQL Database in Object Explorer.
+1. Expand the database node in Object Explorer
+1. Expand the **Query Store** folder.
+1. Open the **Tracked Queries** pane.
+1. Enter the query ID in the **Tracking query** box at the top left of the screen and press enter.
+1. If necessary, select **Configure** to adjust the time interval to match the time when high CPU utilization was occurring.
+
+The page will show the execution plan(s) and related metrics for the query over the most recent 24 hours.
+
+### Identify currently running queries with Transact-SQL
+
+Transact-SQL allows you to identify currently running queries with CPU time they have used so far. You can also use Transact-SQL to query recent CPU usage in your database, top queries by CPU, and queries that compiled the most often.
+
+You can query CPU metrics with [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or [the Azure portal's query editor (preview)](connect-query-portal.md). When using SSMS or Azure Data Studio, open a new query window and connect it to your database (not the master database).
+
+Find currently running queries with CPU usage and execution plans by executing the following query. CPU time is returned in milliseconds.
+
+```sql
+SELECT
+ req.session_id,
+ req.status,
+ req.start_time,
+ req.cpu_time AS 'cpu_time_ms',
+ req.logical_reads,
+ req.dop,
+ s.login_name,
+ s.host_name,
+ s.program_name,
+ object_name(st.objectid,st.dbid) 'ObjectName',
+ REPLACE (REPLACE (SUBSTRING (st.text,(req.statement_start_offset/2) + 1,
+ ((CASE req.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
+ ELSE req.statement_end_offset END - req.statement_start_offset)/2) + 1),
+ CHAR(10), ' '), CHAR(13), ' ') AS statement_text,
+ qp.query_plan,
+ qsx.query_plan as query_plan_with_in_flight_statistics
+FROM sys.dm_exec_requests as req
+JOIN sys.dm_exec_sessions as s on req.session_id=s.session_id
+CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) as st
+OUTER APPLY sys.dm_exec_query_plan(req.plan_handle) as qp
+OUTER APPLY sys.dm_exec_query_statistics_xml(req.session_id) as qsx
+ORDER BY req.cpu_time desc;
+GO
+```
+
+This query returns two copies of the execution plan. The column `query_plan` contains the execution plan from [sys.dm_exec_query_plan()](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-plan-transact-sql). This version of the query plan contains only estimates of row counts and does not contain any execution statistics.
+
+If the column `query_plan_with_in_flight_statistics` returns an execution plan, this plan provides more information. The `query_plan_with_in_flight_statistics` column returns data from [sys.dm_exec_query_statistics_xml()](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-statistics-xml-transact-sql), which includes "in flight" execution statistics such as the actual number of rows returned so far by a currently running query.
+
+### Review CPU usage metrics for the last hour
+
+The following query against `sys.dm_db_resource_stats` returns the average CPU usage over 15-second intervals for approximately the last hour.
+
+```sql
+SELECT
+ end_time,
+ avg_cpu_percent,
+ avg_instance_cpu_percent
+FROM sys.dm_db_resource_stats
+ORDER BY end_time DESC;
+GO
+```
+
+It is important to not focus only on the `avg_cpu_percent` column. The `avg_instance_cpu_percent` column includes CPU used by both user and internal workloads. If `avg_instance_cpu_percent` is close to 100%, CPU resources are saturated. In this case, you should troubleshoot high CPU if app throughput is insufficient or query latency is high.
+
+Learn more in [Resource management in Azure SQL Database](resource-limits-logical-server.md#resource-consumption-by-user-workloads-and-internal-processes).
+
+Review the examples in [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) for more queries.
+
+### Query the top recent 15 queries by CPU usage
+
+Query Store tracks execution statistics, including CPU usage, for queries. The following query returns the top 15 queries that have run in the last 2 hours, sorted by CPU usage. CPU time is returned in milliseconds.
+
+```sql
+WITH AggregatedCPU AS
+ (SELECT
+ q.query_hash,
+ SUM(count_executions * avg_cpu_time / 1000.0) AS total_cpu_ms,
+ SUM(count_executions * avg_cpu_time / 1000.0)/ SUM(count_executions) AS avg_cpu_ms,
+ MAX(rs.max_cpu_time / 1000.00) AS max_cpu_ms,
+ MAX(max_logical_io_reads) max_logical_reads,
+ COUNT(DISTINCT p.plan_id) AS number_of_distinct_plans,
+ COUNT(DISTINCT p.query_id) AS number_of_distinct_query_ids,
+ SUM(CASE WHEN rs.execution_type_desc='Aborted' THEN count_executions ELSE 0 END) AS aborted_execution_count,
+ SUM(CASE WHEN rs.execution_type_desc='Regular' THEN count_executions ELSE 0 END) AS regular_execution_count,
+ SUM(CASE WHEN rs.execution_type_desc='Exception' THEN count_executions ELSE 0 END) AS exception_execution_count,
+ SUM(count_executions) AS total_executions,
+ MIN(qt.query_sql_text) AS sampled_query_text
+ FROM sys.query_store_query_text AS qt
+ JOIN sys.query_store_query AS q ON qt.query_text_id=q.query_text_id
+ JOIN sys.query_store_plan AS p ON q.query_id=p.query_id
+ JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id=p.plan_id
+ JOIN sys.query_store_runtime_stats_interval AS rsi ON rsi.runtime_stats_interval_id=rs.runtime_stats_interval_id
+ WHERE
+ rs.execution_type_desc IN ('Regular', 'Aborted', 'Exception') AND
+ rsi.start_time>=DATEADD(HOUR, -2, GETUTCDATE())
+ GROUP BY q.query_hash),
+OrderedCPU AS
+ (SELECT *,
+ ROW_NUMBER() OVER (ORDER BY total_cpu_ms DESC, query_hash ASC) AS RN
+ FROM AggregatedCPU)
+SELECT *
+FROM OrderedCPU AS OD
+WHERE OD.RN<=15
+ORDER BY total_cpu_ms DESC;
+GO
+```
+
+This query groups by a hashed value of the query. If you find a high value in the `number_of_distinct_query_ids` column, investigate whether a frequently run query isn't properly parameterized. Non-parameterized queries may be compiled on each execution, which consumes significant CPU and [affects the performance of Query Store](/sql/relational-databases/performance/best-practice-with-the-query-store#Parameterize).
+
+To learn more about an individual query, note the query hash and use it to [Identify the CPU usage and query plan for a given query hash](#identify-the-cpu-usage-and-query-plan-for-a-given-query-hash).
+
+### Query the most frequently compiled queries by query hash
+
+Compiling a query plan is a CPU-intensive process. Azure SQL Database [caches plans in memory for reuse](/sql/relational-databases/query-processing-architecture-guide#execution-plan-caching-and-reuse). Some queries may be frequently compiled if they are not parameterized or if [RECOMPILE hints](/sql/t-sql/queries/hints-transact-sql-query) force recompilation.
+
+Query Store tracks the number of times queries are compiled. Run the following query to identify the top 20 queries in Query Store by compilation count, along with the average number of compilations per minute:
+
+```sql
+SELECT TOP (20)
+ query_hash,
+ MIN(initial_compile_start_time) as initial_compile_start_time,
+ MAX(last_compile_start_time) as last_compile_start_time,
+ CASE WHEN DATEDIFF(mi,MIN(initial_compile_start_time), MAX(last_compile_start_time)) > 0
+ THEN 1.* SUM(count_compiles) / DATEDIFF(mi,MIN(initial_compile_start_time),
+ MAX(last_compile_start_time))
+ ELSE 0
+ END as avg_compiles_minute,
+ SUM(count_compiles) as count_compiles
+FROM sys.query_store_query AS q
+GROUP BY query_hash
+ORDER BY count_compiles DESC;
+GO
+```
+
+To learn more about an individual query, note the query hash and use it to [Identify the CPU usage and query plan for a given query hash](#identify-the-cpu-usage-and-query-plan-for-a-given-query-hash).
+
+### Identify the CPU usage and query plan for a given query hash
+
+Run the following query to find the individual query ID, query text, and query execution plans for a given `query_hash`. CPU time is returned in milliseconds.
+
+Replace the value for the `@query_hash` variable with a valid `query_hash` for your workload.
+
+```sql
+declare @query_hash binary(8);
+
+SET @query_hash = 0x6557BE7936AA2E91;
+
+with query_ids as (
+ SELECT
+ q.query_hash,
+ q.query_id,
+ p.query_plan_hash,
+ SUM(qrs.count_executions) * AVG(qrs.avg_cpu_time)/1000. as total_cpu_time_ms,
+ SUM(qrs.count_executions) AS sum_executions,
+ AVG(qrs.avg_cpu_time)/1000. AS avg_cpu_time_ms
+ FROM sys.query_store_query q
+ JOIN sys.query_store_plan p on q.query_id=p.query_id
+ JOIN sys.query_store_runtime_stats qrs on p.plan_id = qrs.plan_id
+ WHERE q.query_hash = @query_hash
+ GROUP BY q.query_id, q.query_hash, p.query_plan_hash)
+SELECT qid.*,
+ qt.query_sql_text,
+ p.count_compiles,
+ TRY_CAST(p.query_plan as XML) as query_plan
+FROM query_ids as qid
+JOIN sys.query_store_query AS q ON qid.query_id=q.query_id
+JOIN sys.query_store_query_text AS qt on q.query_text_id = qt.query_text_id
+JOIN sys.query_store_plan AS p ON qid.query_id=p.query_id and qid.query_plan_hash=p.query_plan_hash
+ORDER BY total_cpu_time_ms DESC;
+GO
+```
+
+This query returns one row for each variation of an execution plan for the `query_hash` across the entire history of your Query Store. The results are sorted by total CPU time.
+
+### Use interactive Query Store tools to track historic CPU utilization
+
+If you prefer to use graphic tools, follow these steps to use the interactive Query Store tools in SSMS.
+
+1. Open SSMS and connect to your database in Object Explorer.
+1. Expand the database node in Object Explorer
+1. Expand the **Query Store** folder.
+1. Open the **Overall Resource Consumption** pane.
+
+Total CPU time for your database over the last month in milliseconds is shown in the bottom-left portion of the pane. In the default view, CPU time is aggregated by day.
++
+Select **Configure** in the top right of the pane to select a different time period. You can also change the unit of aggregation. For example, you can choose to see data for a specific date range and aggregate the data by hour.
+
+### Use interactive Query Store tools to identify top queries by CPU time
+
+Select a bar in the chart to drill in and see queries running in a specific time period. The **Top Resource Consuming Queries** pane will open. Alternately, you can open **Top Resource Consuming Queries** from the Query Store node under your database in Object Explorer directly.
++
+In the default view, the **Top Resource Consuming Queries** pane shows queries by **Duration (ms)**. Duration may sometimes be lower than CPU time: queries using parallelism may use much more CPU time than their overall duration. Duration may also be higher than CPU time if waits were significant. To see queries by CPU time, select the **Metric** drop-down at the top left of the pane and select **CPU Time(ms)**.
+
+Each bar in the top-left quadrant represents a query. Select a bar to see details for that query. The top-right quadrant of the screen shows how many execution plans are in Query Store for that query and maps them according to when they were executed and how much of your selected metric was used. Select each **Plan ID** to control which query execution plan is displayed in the bottom half of the screen.
+
+> [!NOTE]
+> For a guide to interpreting Query Store views and the shapes which appear in the Top Resource Consumers view, see [Best practices with Query Store](/sql/relational-databases/performance/best-practice-with-the-query-store#start-with-query-performance-troubleshooting)
+
+## Reduce CPU usage
+Part of your troubleshooting should include learning more about the queries identified in the previous section. You can reduce CPU usage by tuning indexes, modifying your application patterns, tuning queries, and adjusting CPU-related settings for your database.
+
+- If you found new queries using significant CPU appearing in the workload, validate that indexes have been optimized for those queries. You can [tune indexes manually](#tune-indexes-manually) or [reduce CPU usage with automatic index tuning](#reduce-cpu-usage-with-automatic-index-tuning). Evaluate if your [max degree of parallelism](#reduce-cpu-usage-by-tuning-the-max-degree-of-parallelism) setting is correct for your increased workload.
+- If you found that the overall execution count of queries is higher than it used to be, [tune indexes for your highest CPU consuming queries](#tune-indexes-manually) and consider [automatic index tuning](#reduce-cpu-usage-with-automatic-index-tuning). Evaluate if your [max degree of parallelism](#reduce-cpu-usage-by-tuning-the-max-degree-of-parallelism) setting is correct for your increased workload.
+- If you found queries in the workload with [parameter sensitive plan (PSP) problems](../identify-query-performance-issues.md), consider [automatic plan correction (force plan)](#reduce-cpu-usage-with-automatic-plan-correction-force-plan). You can also [manually force a plan in Query Store](/sql/relational-databases/system-stored-procedures/sp-query-store-force-plan-transact-sql) or tune the Transact-SQL for the query to result in a consistently high-performing query plan.
+- If you found evidence that a large amount of compilation or recompilation is occurring, [tune the queries so that they are properly parameterized or do not require recompile hints](#tune-your-application-queries-and-database-settings).
+- If you found that queries are using excessive parallelism, [tune the max degree of parallelism](#reduce-cpu-usage-by-tuning-the-max-degree-of-parallelism).
+
+Consider the following strategies in this section.
+
+### Reduce CPU usage with automatic index tuning
+
+Effective index tuning reduces CPU usage for many queries. Optimized indexes reduce the logical and physical reads for a query, which often results in the query needing to do less work.
+
+Azure SQL Database offers [automatic index management](automatic-tuning-overview.md#automatic-tuning-options) for workloads on primary replicas. Automatic index management uses machine learning to monitor your workload and optimize rowstore disk-based nonclustered indexes for your database.
+
+[Review performance recommendations](database-advisor-find-recommendations-portal.md), including index recommendations, in the Azure portal. You can apply these recommendations manually or [enable the CREATE INDEX automatic tuning option](automatic-tuning-enable.md) to create and verify the performance of new indexes in your database.
+
+### Reduce CPU usage with automatic plan correction (force plan)
+
+Another common cause of high CPU incidents is [execution plan choice regression](/sql/relational-databases/automatic-tuning/automatic-tuning#what-is-execution-plan-choice-regression). Azure SQL Database offers the [force plan](automatic-tuning-overview.md#automatic-tuning-options) automatic tuning option to identify regressions in query execution plans in workloads on primary replicas. With this automatic tuning feature enabled, Azure SQL Database will test if forcing a query execution plan results in reliable improved performance for queries with execution plan regression.
+
+If your database was created after March 2020, the **force plan** automatic tuning option was automatically enabled. If your database was created prior to this time, you may wish to [enable the force plan automatic tuning option](automatic-tuning-enable.md).
+
+### Tune indexes manually
+
+Use the methods described in [Identify the causes of high CPU](#identify-the-causes-of-high-cpu) to identify query plans for your top CPU consuming queries. These execution plans will aid you in [identifying and adding nonclustered indexes](performance-guidance.md#identifying-and-adding-missing-indexes) to speed up your queries.
+
+Each disk-based [nonclustered index](/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described) in your database requires storage space and must be maintained by the SQL engine. Modify existing indexes instead of adding new indexes when possible, and ensure that new indexes successfully reduce CPU usage. For an overview of nonclustered indexes, see [Nonclustered Index Design Guidelines](/sql/relational-databases/sql-server-index-design-guide#Nonclustered).
+
+For some workloads, columnstore indexes may be the best choice to reduce CPU of frequent read queries. See [Columnstore indexes - Design guidance](/sql/relational-databases/indexes/columnstore-indexes-design-guidance) for high-level recommendations on scenarios when columnstore indexes may be appropriate.
+
+### Tune your application, queries, and database settings
+
+In examining your top queries, you may find [application characteristics to tune](performance-guidance.md#application-characteristics) such as "chatty" behavior, workloads that would benefit from sharding, and suboptimal database access design. For read-heavy workloads, consider [read-only replicas to offload read-only query workloads](read-scale-out.md) and [application-tier caching](performance-guidance.md#application-tier-caching) as long-term strategies to scale out frequently read data.
+
+You may also choose to manually tune the top CPU using queries identified in your workload. Manual tuning options include rewriting Transact-SQL statements, [forcing plans](/sql/relational-databases/system-stored-procedures/sp-query-store-force-plan-transact-sql) in Query Store, and applying [query hints](/sql/t-sql/queries/hints-transact-sql-query).
+
+If you identify cases where queries sometimes use an execution plan that isn't optimal for performance, review the solutions for [queries that have parameter sensitive plan (PSP) problems](../identify-query-performance-issues.md).
+
+If you identify non-parameterized queries with a high number of plans, consider parameterizing these queries, making sure to fully declare parameter data types, including length and precision. This may be done by modifying the queries, creating a [plan guide to force parameterization](/sql/relational-databases/performance/specify-query-parameterization-behavior-by-using-plan-guides) of a specific query, or by enabling [forced parameterization](/sql/relational-databases/query-processing-architecture-guide#execution-plan-caching-and-reuse) at the database level.
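+
+The sketch below shows both approaches; the `dbo.Orders` table, its columns, and the parameter values are hypothetical placeholders for your own schema:
+
+```sql
+-- Parameterized batch with fully declared parameter types, length, and precision,
+-- so one cached plan can be reused across parameter values.
+EXEC sp_executesql
+    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerName = @name AND Total > @minTotal;',
+    N'@name nvarchar(100), @minTotal decimal(18,2)',
+    @name = N'Contoso', @minTotal = 250.00;
+
+-- Alternatively, enable forced parameterization for the whole database (test carefully first).
+-- ALTER DATABASE CURRENT SET PARAMETERIZATION FORCED;
+```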
+
+If you identify queries with high compilation rates, identify what causes the frequent compilation. The most common cause of frequent compilation is [RECOMPILE hints](/sql/t-sql/queries/hints-transact-sql-query). Whenever possible, identify when the `RECOMPILE` hint was added and what problem it was meant to solve. Investigate whether an alternate performance tuning solution can be implemented to provide consistent performance for frequently running queries without a `RECOMPILE` hint.
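+
+One illustrative way to spot frequently recompiled statements is to look for cached plans with a high `plan_generation_num` relative to their execution count:
+
+```sql
+-- Statements whose plans have been regenerated (recompiled) many times.
+SELECT TOP 10
+    qs.plan_generation_num,
+    qs.execution_count,
+    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
+        ((CASE qs.statement_end_offset
+              WHEN -1 THEN DATALENGTH(st.text)
+              ELSE qs.statement_end_offset
+          END - qs.statement_start_offset) / 2) + 1) AS statement_text
+FROM sys.dm_exec_query_stats AS qs
+CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
+WHERE qs.plan_generation_num > 1
+ORDER BY qs.plan_generation_num DESC;
+```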
+
+### Reduce CPU usage by tuning the max degree of parallelism
+
+The [max degree of parallelism (MAXDOP)](configure-max-degree-of-parallelism.md#overview) setting controls intra-query parallelism in the database engine. Higher MAXDOP values generally result in more parallel threads per query, and faster query execution.
+
+In some cases, a large number of parallel queries running concurrently can slow down a workload and cause high CPU usage. Excessive parallelism is most likely to occur in databases with a large number of vCores where MAXDOP is set to a high number or to zero. When MAXDOP is set to zero, the database engine sets the number of [schedulers](/sql/relational-databases/thread-and-task-architecture-guide#sql-server-task-scheduling) to be used by parallel threads to the total number of logical cores or 64, whichever is smaller.
+
+You can identify the max degree of parallelism setting for your database with Transact-SQL. Connect to your database with SSMS or Azure Data Studio and run the following query:
+
+```sql
+SELECT
+ name,
+ value,
+ value_for_secondary,
+ is_value_default
+FROM sys.database_scoped_configurations
+WHERE name=N'MAXDOP';
+GO
+```
+
+Consider experimenting with small changes in the MAXDOP configuration at the database level, or modifying individual problematic queries to use a non-default MAXDOP using a query hint. For more information, see the examples in [configure max degree of parallelism](configure-max-degree-of-parallelism.md).
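+
+As an illustrative sketch, where `dbo.Sales` is a hypothetical table and the MAXDOP values are starting points to validate against your own workload:
+
+```sql
+-- Lower MAXDOP for the whole database (primary replica).
+ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
+
+-- Or cap parallelism for a single problematic query with a hint.
+SELECT ProductId, COUNT(*) AS SalesCount
+FROM dbo.Sales
+GROUP BY ProductId
+OPTION (MAXDOP 2);
+```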
+
+## When to add CPU resources
+
+You may find that your workload's queries and indexes are properly tuned, or that performance tuning requires changes that you cannot make in the short term due to internal processes or other reasons. Adding more CPU resources may be beneficial for these databases. You can [scale database resources with minimal downtime](scale-resources.md).
+
+You can add more CPU resources to your Azure SQL Database by configuring the vCore count or the [hardware generation](service-tiers-sql-database-vcore.md#hardware-generations) for databases using the [vCore purchase model](service-tiers-sql-database-vcore.md).
+
+Under the [DTU-based purchase model](service-tiers-dtu.md), you can raise your service tier and increase the number of database transaction units (DTUs). A DTU represents a blended measure of CPU, memory, reads, and writes. One benefit of the vCore purchase model is that it allows more granular control over the hardware in use and the number of vCores. You can [migrate Azure SQL Database from the DTU-based model to the vCore-based model](migrate-dtu-to-vcore.md) to transition between purchase models.
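+
+If you choose to scale with T-SQL, a sketch of changing the service objective follows; `GP_Gen5_8` is a placeholder, so substitute a valid service objective for your target tier and region:
+
+```sql
+-- Run while connected to the master database of your logical server.
+ALTER DATABASE [your_database_name]
+MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_8');
+```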
+
+## Next steps
+
+Learn more about monitoring and performance tuning Azure SQL Database in the following articles:
+
+* [Monitoring Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views](monitoring-with-dmvs.md)
+* [SQL Server index architecture and design guide](/sql/relational-databases/sql-server-index-design-guide)
+* [Enable automatic tuning to monitor queries and improve workload performance](automatic-tuning-enable.md)
+* [Query processing architecture guide](/sql/relational-databases/query-processing-architecture-guide)
+* [Best practices with Query Store](/sql/relational-databases/performance/best-practice-with-the-query-store)
+* [Detectable types of query performance bottlenecks in Azure SQL Database](../identify-query-performance-issues.md)
azure-sql Logical Servers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/logical-servers.md
To create and manage servers, databases, and firewalls with the [Azure CLI](/cli
|[az sql server firewall-rule delete](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_delete)|Deletes a firewall rule| > [!TIP]
-> For an Azure CLI quickstart, see [Create a database in Azure SQL Database using the Azure CLI](az-cli-script-samples-content-guide.md). For Azure CLI example scripts, see [Use the CLI to create a database in Azure SQL Database and configure a firewall rule](scripts/create-and-configure-database-cli.md) and [Use the CLI to monitor and scale a database in Azure SQL Database](scripts/monitor-and-scale-database-cli.md).
+> For an Azure CLI quickstart, see [Create a database in Azure SQL Database using the Azure CLI](az-cli-script-samples-content-guide.md). For Azure CLI example scripts, see [Use the CLI to create a database in Azure SQL Database and configure a firewall rule](scripts/create-and-configure-database-cli.md) and [Use Azure CLI to monitor and scale a database in Azure SQL Database](scripts/monitor-and-scale-database-cli.md).
> ## Manage servers, databases, and firewalls using Transact-SQL
To create and manage servers, databases, and firewalls, use these REST API reque
## Next steps - To learn about migrating a SQL Server database to Azure SQL Database, see [Migrate to Azure SQL Database](migrate-to-database-from-sql-server.md).-- For information about supported features, see [Features](features-comparison.md).+
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-retention-overview.md
W=12 weeks (84 days), M=12 months (365 days), Y=10 years (3650 days), WeekOfYear
![ltr example](./media/long-term-retention-overview/ltr-example.png)
-If you modify the above policy and set W=0 (no weekly backups), the cadence of backup copies will change as shown in the above table by the highlighted dates. The storage amount needed to keep these backups would reduce accordingly.
+If you modify the above policy and set W=0 (no weekly backups), Azure only retains the monthly and yearly backups. No weekly backups are stored under the LTR policy. The storage amount needed to keep these backups reduces accordingly.
> [!IMPORTANT] > The timing of individual LTR backups is controlled by Azure. You cannot manually create an LTR backup or control the timing of the backup creation. After configuring an LTR policy, it may take up to 7 days before the first LTR backup will show up on the list of available backups.
azure-sql Maintenance Window Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window-configure.md
Last updated 03/23/2021 # Configure maintenance window (Preview)
-Configure the [maintenance window (Preview)](maintenance-window.md) for an Azure SQL database, elastic pool, or Azure SQL Managed Instance database during resource creation, or anytime after a resource is created.
+Configure the [maintenance window (Preview)](maintenance-window.md) for an Azure SQL database, elastic pool, or Azure SQL Managed Instance database during resource creation, or anytime after a resource is created.
The *System default* maintenance window is 5PM to 8AM daily (local time of the Azure region where the resource is located) to avoid interruptions during peak business hours. If the *System default* maintenance window is not the best time, select one of the other available maintenance windows.
The ability to change to a different maintenance window is not available for eve
> [!Important] > Configuring the maintenance window is a long-running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except for a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds, even in the case of interrupted long-running transactions. To minimize the impact of the reconfiguration, perform the operation outside of peak hours.
-## Configure maintenance window during database creation
+## Configure maintenance window during database creation
# [Portal](#tab/azure-portal)
-To configure the maintenance window when you create a database, elastic pool, or managed instance, set the desired **Maintenance window** on the **Additional settings** page.
+To configure the maintenance window when you create a database, elastic pool, or managed instance, set the desired **Maintenance window** on the **Additional settings** page.
-## Set the maintenance window while creating a single database or elastic pool
+### Set the maintenance window while creating a single database or elastic pool
For step-by-step information on creating a new database or pool, see [Create an Azure SQL Database single database](single-database-create-quickstart.md). :::image type="content" source="media/maintenance-window-configure/additional-settings.png" alt-text="Create database additional settings tab"::: -
-## Set the maintenance window while creating a managed instance
+### Set the maintenance window while creating a managed instance
For step-by-step information on creating a new managed instance, see [Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md). :::image type="content" source="media/maintenance-window-configure/additional-settings-mi.png" alt-text="Create managed instance additional settings tab"::: --- # [PowerShell](#tab/azure-powershell) The following examples show how to configure the maintenance window using Azure PowerShell. You can [install Azure PowerShell](/powershell/azure/install-az-ps), or use the Azure Cloud Shell.
-## Launch Azure Cloud Shell
+### Launch Azure Cloud Shell
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/powershell](https://shell.azure.com/powershell). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+When Cloud Shell opens, verify that **PowerShell** is selected for your environment. Subsequent sessions will use Azure PowerShell in a PowerShell environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
-## Discover available maintenance windows
+### Discover available maintenance windows
-When setting the maintenance window, each region has its own maintenance window options that correspond to the timezone for the region the database or pool is located.
+When setting the maintenance window, keep in mind that each region has its own maintenance window options, which correspond to the time zone of the region where the database or pool is located.
-### Discover SQL Database and elastic pool maintenance windows
+#### Discover SQL Database and elastic pool maintenance windows
The following example returns the available maintenance windows for the *eastus2* region using the [Get-AzMaintenancePublicConfiguration](/powershell/module/az.maintenance/get-azmaintenancepublicconfiguration) cmdlet. For databases and elastic pools, set `MaintenanceScope` to `SQLDB`.
The following example returns the available maintenance windows for the *eastus2
$configurations | ?{ $_.Location -eq $location -and $_.MaintenanceScope -eq "SQLDB"} ```
-### Discover SQL Managed Instance maintenance windows
+#### Discover SQL Managed Instance maintenance windows
The following example returns the available maintenance windows for the *eastus2* region using the [Get-AzMaintenancePublicConfiguration](/powershell/module/az.maintenance/get-azmaintenancepublicconfiguration) cmdlet. For managed instances, set `MaintenanceScope` to `SQLManagedInstance`.
The following example returns the available maintenance windows for the *eastus2
$configurations | ?{ $_.Location -eq $location -and $_.MaintenanceScope -eq "SQLManagedInstance"} ``` -
-## Set the maintenance window while creating a single database
+### Set the maintenance window while creating a single database
The following example creates a new database and sets the maintenance window using the [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) cmdlet. The `-MaintenanceConfigurationId` must be set to a valid value for your database's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows). - ```powershell-interactive # Set variables for your database $resourceGroupName = "your_resource_group_name"
The following example creates a new database and sets the maintenance window usi
$database ``` --
-## Set the maintenance window while creating an elastic pool
+### Set the maintenance window while creating an elastic pool
The following example creates a new elastic pool and sets the maintenance window using the [New-AzSqlElasticPool](/powershell/module/az.sql/new-azsqlelasticpool) cmdlet. The maintenance window is set on the elastic pool, so all databases in the pool have the pool's maintenance window schedule. The `-MaintenanceConfigurationId` must be set to a valid value for your pool's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows). - ```powershell-interactive # Set variables for your pool $resourceGroupName = "your_resource_group_name"
The following example creates a new elastic pool and sets the maintenance window
$pool ```
-## Set the maintenance window while creating a managed instance
+### Set the maintenance window while creating a managed instance
The following example creates a new managed instance and sets the maintenance window using the [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance) cmdlet. The maintenance window is set on the instance, so all databases in the instance have the instance's maintenance window schedule. For `-MaintenanceConfigurationId`, the *MaintenanceConfigName* must be a valid value for your instance's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows). - ```powershell New-AzSqlInstance -Name "your_mi_name" ` -ResourceGroupName "your_resource_group_name" `
The following example creates a new managed instance and sets the maintenance wi
# [CLI](#tab/azure-cli)
-The following examples show how to configure the maintenance window using Azure CLI. You can [install the Azure CLI](/cli/azure/install-azure-cli), or use the Azure Cloud Shell.
+The following examples show how to configure the maintenance window using Azure CLI. You can [install Azure CLI](/cli/azure/install-azure-cli), or use the Azure Cloud Shell.
+
+Configuring the maintenance window with Azure CLI is only available for SQL Managed Instance.
+
+### Launch Azure Cloud Shell
-Configuring the maintenance window with the Azure CLI is only available for SQL Managed Instance.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-## Launch Azure Cloud Shell
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/cli](https://shell.azure.com/cli). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
+### Sign in to Azure
-## Discover available maintenance windows
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+
+### Discover available maintenance windows
When setting the maintenance window, keep in mind that each region has its own maintenance window options, which correspond to the time zone of the region where the database or pool is located.
-### Discover SQL Database and elastic pool maintenance windows
+#### Discover SQL Database and elastic pool maintenance windows
The following example returns the available maintenance windows for the *eastus2* region using the [az maintenance public-configuration list ](/cli/azure/maintenance/public-configuration#az_maintenance_public_configuration_list) command. For databases and elastic pools, set `maintenanceScope` to `SQLDB`.
The following example returns the available maintenance windows for the *eastus2
az maintenance public-configuration list --query "[?location=='$location'&&contains(maintenanceScope,'SQLDB')]" ```
-### Discover SQL Managed Instance maintenance windows
+#### Discover SQL Managed Instance maintenance windows
The following example returns the available maintenance windows for the *eastus2* region using the [az maintenance public-configuration list ](/cli/azure/maintenance/public-configuration#az_maintenance_public_configuration_list) command. For managed instances, set `maintenanceScope` to `SQLManagedInstance`.
The following example returns the available maintenance windows for the *eastus2
az maintenance public-configuration list --query "[?location=='eastus2'&&contains(maintenanceScope,'SQLManagedInstance')]" ```
-## Set the maintenance window while creating a single database
+### Set the maintenance window while creating a single database
The following example creates a new database and sets the maintenance window using the [az sql db create](/cli/azure/sql/db#az_sql_db_create) command. The `--maint-config-id` (or `-m`) must be set to a valid value for your database's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows). - ```azurecli # Set variables for your database resourceGroupName="your_resource_group_name"
The following example creates a new database and sets the maintenance window usi
--maint-config-id $maintenanceConfig ```
-## Set the maintenance window while creating an elastic pool
+### Set the maintenance window while creating an elastic pool
The following example creates a new elastic pool and sets the maintenance window using the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az_sql_elastic_pool_create) cmdlet. The maintenance window is set on the elastic pool, so all databases in the pool have the pool's maintenance window schedule. The `--maint-config-id` (or `-m`) must be set to a valid value for your pool's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows). - ```azurecli # Set variables for your pool resourceGroupName="your_resource_group_name"
The following example creates a new elastic pool and sets the maintenance window
--maint-config-id $maintenanceConfig ```
-## Set the maintenance window while creating a managed instance
+### Set the maintenance window while creating a managed instance
The following example creates a new managed instance and sets the maintenance window using [az sql mi create](/cli/azure/sql/mi#az_sql_mi_create). The maintenance window is set on the instance, so all databases in the instance have the instance's maintenance window schedule. *MaintenanceConfigName* must be a valid value for your instance's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows).
The following example creates a new managed instance and sets the maintenance wi
## Configure maintenance window for existing databases - When applying a maintenance window selection to a database, a brief reconfiguration (several seconds) may be experienced in some cases as Azure applies the required changes. # [Portal](#tab/azure-portal) The following steps set the maintenance window on an existing database, elastic pool, or managed instance using the Azure portal: -
-## Set the maintenance window for an existing database or elastic pool
+### Set the maintenance window for an existing database or elastic pool
1. Navigate to the SQL database or elastic pool you want to set the maintenance window for. 1. In the **Settings** menu select **Maintenance**, then select the desired maintenance window. :::image type="content" source="media/maintenance-window-configure/maintenance.png" alt-text="SQL database Maintenance page"::: -
-## Set the maintenance window for an existing managed instance
+### Set the maintenance window for an existing managed instance
1. Navigate to the managed instance you want to set the maintenance window for. 1. In the **Settings** menu select **Maintenance**, then select the desired maintenance window. :::image type="content" source="media/maintenance-window-configure/maintenance-mi.png" alt-text="SQL managed instance Maintenance page"::: -- # [PowerShell](#tab/azure-powershell)
-## Set the maintenance window for an existing database
+### Set the maintenance window for an existing database
The following example sets the maintenance window on an existing database using the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) cmdlet. The `-MaintenanceConfigurationId` must be set to a valid value for your database's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows).
The `-MaintenanceConfigurationId` must be set to a valid value for your database
$database ```
-## Set the maintenance window on an existing elastic pool
+### Set the maintenance window on an existing elastic pool
The following example sets the maintenance window on an existing elastic pool using the [Set-AzSqlElasticPool](/powershell/module/az.sql/set-azsqlelasticpool) cmdlet. It's important to make sure that the `$maintenanceConfig` value is a valid value for your pool's region. To get valid values for a region, see [Discover available maintenance windows](#discover-available-maintenance-windows).
It's important to make sure that the `$maintenanceConfig` value is a valid value
$pool ``` --
-## Set the maintenance window on an existing managed instance
+### Set the maintenance window on an existing managed instance
The following example sets the maintenance window on an existing managed instance using the [Set-AzSqlInstance](/powershell/module/az.sql/set-azsqlinstance) cmdlet. It's important to make sure that the `$maintenanceConfig` value is a valid value for your instance's region. To get valid values for a region, see [Discover available maintenance windows](#discover-available-maintenance-windows). - ```powershell-interactive Set-AzSqlInstance -Name "your_mi_name" ` -ResourceGroupName "your_resource_group_name" `
It's important to make sure that the `$maintenanceConfig` value must be a valid
# [CLI](#tab/azure-cli)
-The following examples show how to configure the maintenance window using Azure CLI. You can [install the Azure CLI](/cli/azure/install-azure-cli), or use the Azure Cloud Shell.
+The following examples show how to configure the maintenance window using Azure CLI. You can [install Azure CLI](/cli/azure/install-azure-cli), or use the Azure Cloud Shell.
-## Set the maintenance window for an existing database
+### Set the maintenance window for an existing database
The following example sets the maintenance window on an existing database using the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command. The `--maint-config-id` (or `-m`) must be set to a valid value for your database's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows).
The following example sets the maintenance window on an existing database using
--maint-config-id $maintenanceConfig ```
-## Set the maintenance window on an existing elastic pool
+### Set the maintenance window on an existing elastic pool
The following example sets the maintenance window on an existing elastic pool using the [az sql elastic-pool update](/cli/azure/sql/elastic-pool#az_sql_elastic_pool_update) command. It's important to make sure that the `maintenanceConfig` value is a valid value for your pool's region. To get valid values for a region, see [Discover available maintenance windows](#discover-available-maintenance-windows).
It's important to make sure that the `maintenanceConfig` value is a valid value
--maint-config-id $maintenanceConfig ```
-## Set the maintenance window on an existing managed instance
+### Set the maintenance window on an existing managed instance
The following example sets the maintenance window using [az sql mi update](/cli/azure/sql/mi#az_sql_mi_update). The maintenance window is set on the instance, so all databases in the instance have the instance's maintenance window schedule. For `-MaintenanceConfigurationId`, the *MaintenanceConfigName* must be a valid value for your instance's region. To get valid values for your region, see [Discover available maintenance windows](#discover-available-maintenance-windows).
Be sure to delete unneeded resources after you're finished with them to avoid un
1. Navigate to the SQL database or elastic pool you no longer need. 1. On the **Overview** menu, select the option to delete the resource. - # [PowerShell](#tab/azure-powershell) ```powershell-interactive
azure-sql Monitoring With Dmvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitoring-with-dmvs.md
ORDER BY total_cpu_millisec DESC;
Once you identify the problematic queries, it's time to tune those queries to reduce CPU utilization. If you don't have time to tune the queries, you may also choose to upgrade the SLO of the database to work around the issue.
+For Azure SQL Database users, learn more about handling CPU performance problems in [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
+ ## Identify IO performance issues When identifying IO performance issues, the top wait types associated with IO issues are:
ORDER BY highest_cpu_queries.total_worker_time DESC;
## See also
-[Introduction to Azure SQL Database and Azure SQL Managed Instance](sql-database-paas-overview.md)
+- [Introduction to Azure SQL Database and Azure SQL Managed Instance](sql-database-paas-overview.md)
+- [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
+- [Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance](performance-guidance.md)
azure-sql Performance Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/performance-guidance.md
To learn more about the script and get started, visit the [wiki](https://aka.ms/
## Next steps -- For more information about DTU-based service tiers, see [DTU-based purchasing model](service-tiers-dtu.md).-- For more information about vCore-based service tiers, see [vCore-based purchasing model](service-tiers-vcore.md).-- For more information about elastic pools, see [What is an Azure elastic pool?](elastic-pool-overview.md)-- For information about performance and elastic pools, see [When to consider an elastic pool](elastic-pool-overview.md)
+- Learn about the [DTU-based purchasing model](service-tiers-dtu.md).
+- Learn more about the [vCore-based purchasing model](service-tiers-vcore.md).
+- Read [What is an Azure elastic pool?](elastic-pool-overview.md)
+- Discover [When to consider an elastic pool](elastic-pool-overview.md)
+- Read about [Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views](monitoring-with-dmvs.md)
+- Learn to [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
azure-sql Resource Limits Dtu Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-single-databases.md
Last updated 04/16/2021 # Resource limits for single databases using the DTU purchasing model - Azure SQL Database+ [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)] This article provides the detailed resource limits for Azure SQL Database single databases using the DTU purchasing model.
The following tables show the resources available for a single database at each
* [Transact-SQL](single-database-manage.md#transact-sql-t-sql) via [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql#overview-sql-database) * [Azure portal](single-database-manage.md#the-azure-portal) * [PowerShell](single-database-manage.md#powershell)
-* [Azure CLI](single-database-manage.md#the-azure-cli)
+* [Azure CLI](single-database-manage.md#azure-cli)
* [REST API](single-database-manage.md#rest-api) > [!IMPORTANT]
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
You can set the service tier, compute size (service objective), and storage amou
* [Transact-SQL](single-database-manage.md#transact-sql-t-sql) via [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql#overview-sql-database) * [Azure portal](single-database-manage.md#the-azure-portal) * [PowerShell](single-database-manage.md#powershell)
-* [Azure CLI](single-database-manage.md#the-azure-cli)
+* [Azure CLI](single-database-manage.md#azure-cli)
* [REST API](single-database-manage.md#rest-api) > [!IMPORTANT]
azure-sql Add Database To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-database-to-failover-group-cli.md
Title: "The Azure CLI: Add a database to a failover group"
-description: Use the Azure CLI example script to create a database in Azure SQL Database, add it to an auto-failover group, and test failover.
+ Title: "Azure CLI example: Add a database to a failover group"
+description: Use this Azure CLI example script to create a database in Azure SQL Database, add it to an auto-failover group, and test failover.
Previously updated : 07/16/2019 Last updated : 12/23/2021
-# Use the Azure CLI to add a database to a failover group
+
+# Use Azure CLI to add a database to a failover group
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)] This Azure CLI script example creates a database in Azure SQL Database, creates a failover group, adds the database to it, and tests failover.
-If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
## Sample script
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+ ### Sign in to Azure
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
+subscription="<subscriptionId>" # add subscription here
az account set -s $subscription # ...or use 'az login' ```
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+ ### Run the script
-[!code-azurecli-interactive[main](../../../../cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh "Add Azure SQL Database to failover group")]
-### Clean up deployment
+### Clean up resources
-Use the following command to remove the resource group and all resources associated with it.
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resource
+```azurecli
+az group delete --name $resourceGroup
``` ## Sample reference
This script uses the following commands. Each command in the table links to comm
## Next steps
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-elastic-pool-to-failover-group-cli.md
+
+ Title: "Azure CLI example: Failover group - Azure SQL Database elastic pool"
+description: Use this Azure CLI example script to create an Azure SQL Database elastic pool, add it to a failover group, and test failover.
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to add an Azure SQL Database elastic pool to a failover group
+
+This Azure CLI script example creates a single database, adds it to an elastic pool, creates a failover group, and tests failover.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+
+### Run the script
++
+### Clean up resources
+
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Description |
+|||
+| [az sql elastic-pool](/cli/azure/sql/elastic-pool) | Elastic pool commands. |
+| [az sql failover-group](/cli/azure/sql/failover-group) | Failover group commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Auditing Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/auditing-threat-detection-cli.md
+
+ Title: "Azure CLI example: Auditing and Advanced Threat Protection in Azure SQL Database"
+description: Use this Azure CLI example script to configure auditing and Advanced Threat Protection in an Azure SQL Database
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to configure SQL Database auditing and Advanced Threat Protection
+
+This Azure CLI script example configures SQL Database auditing and Advanced Threat Protection.
+
+If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+
+### Run the script
++
+### Clean up resources
+
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Description |
+|||
+| [az sql db audit-policy](/cli/azure/sql/db/audit-policy) | Sets the auditing policy for a database. |
+| [az sql db threat-policy](/cli/azure/sql/db/threat-policy) | Sets an Advanced Threat Protection policy on a database. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/backup-database-cli.md
+
+ Title: "Azure CLI example: Backup a database in Azure SQL Database"
+description: Use this Azure CLI example script to backup an Azure SQL single database to an Azure storage container
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to back up an Azure SQL single database to an Azure storage container
+
+This Azure CLI example backs up a database in SQL Database to an Azure storage container.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+
+### Run the script
++
+### Clean up resources
+
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Notes |
+|||
+| [az sql server](/cli/azure/sql/server) | Server commands. |
+| [az sql db](/cli/azure/sql/db) | Database commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/copy-database-to-new-server-cli.md
+
+ Title: "Azure CLI example: Copy database in Azure SQL Database to new server"
+description: Use this Azure CLI example script to copy a database in Azure SQL Database to a new server
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to copy a database in Azure SQL Database to a new server
+
+This Azure CLI script example creates a copy of an existing database in a new server.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+
+### Run the script
++
+### Clean up resources
+
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $targetResourceGroup
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Description |
+|||
+| [az sql db copy](/cli/azure/sql/db#az_sql_db_copy) | Creates a copy of a database that uses the snapshot at the current time. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Create And Configure Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/create-and-configure-database-cli.md
Title: "The Azure CLI: Create a single database"
+ Title: "Azure CLI example: Create a single database"
description: Use this Azure CLI example script to create a single database.
Previously updated : 06/25/2019 Last updated : 12/23/2021
-# Use the Azure CLI to create a single database and configure a firewall rule
+# Use Azure CLI to create a single database and configure a firewall rule
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)] This Azure CLI script example creates a single database in Azure SQL Database and configures a server-level firewall rule. After the script has been successfully run, the database can be accessed from all Azure services and the configured IP address.
-If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
## Sample script
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+ ### Sign in to Azure
+Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
### Run the script
-[!code-azurecli-interactive[main](../../../../cli_scripts/sql-database/create-and-configure-database/create-and-configure-database.sh "Create SQL Database")]
-### Clean up deployment
+### Clean up resources
-Use the following command to remove the resource group and all resources associated with it.
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resource
+```azurecli
+az group delete --name $resourceGroup
``` ## Sample reference
This script uses the following commands. Each command in the table links to comm
## Next steps
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
+
+ Title: "Azure CLI example: Import BACPAC file to database in Azure SQL Database"
+description: Use this Azure CLI example script to import a BACPAC file into a database in Azure SQL Database
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to import a BACPAC file into a database in SQL Database
+
+This Azure CLI script example imports a database from a *.bacpac* file into a database in SQL Database.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Sign in to Azure
+
+For this script, use Azure CLI locally as it takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login)
+
+### Run the script
++
+### Clean up resources
+
+Use the following command to remove the resource group and all resources associated with it using the [az group delete](/cli/azure/group#az_group_delete) command, unless you have additional needs for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command specific documentation.
+
+| Command | Description |
+|||
+| [az sql server](/cli/azure/sql/server) | Server commands. |
+| [az sql db import](/cli/azure/sql/db#az_sql_db_import) | Database import command. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Monitor And Scale Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/monitor-and-scale-database-cli.md
Title: "Azure CLI: Monitor and scale a single database in Azure SQL Database"
+ Title: "Azure CLI example: Monitor and scale a single database in Azure SQL Database"
description: Use an Azure CLI example script to monitor and scale a single database in Azure SQL Database.
Previously updated : 06/25/2019 Last updated : 12/23/2021 # Use the Azure CLI to monitor and scale a single database in Azure SQL Database
This Azure CLI script example scales a single database in Azure SQL Database to
If you choose to install and use the Azure CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+ ## Sample script
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+ ### Sign in to Azure
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
+subscription="<subscriptionId>" # add subscription here
az account set -s $subscription # ...or use 'az login' ```
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+ ### Run the script
-[!code-azurecli-interactive[main](../../../../cli_scripts/sql-database/monitor-and-scale-database/monitor-and-scale-database.sh "Monitor and scale a database in Azure SQL Database")]
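The referenced shell script lives in the samples repository and isn't reproduced in this change summary. A minimal sketch of the core operations, under assumed resource names, scales the database with `az sql db update` and optionally checks a usage metric with `az monitor metrics list`:

```azurecli
# Hypothetical values; replace with your own resource names.
resourceGroup="myResourceGroup"
server="mysqlserver-$RANDOM"
database="mySampleDatabase"

# Scale the single database up to the Standard S2 service objective.
az sql db update \
    --resource-group $resourceGroup \
    --server $server \
    --name $database \
    --service-objective S2

# Optionally review a usage metric for the database ('storage' reports data space used).
az monitor metrics list \
    --resource $(az sql db show -g $resourceGroup -s $server -n $database --query id -o tsv) \
    --metric storage
```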
> [!TIP] > Use [az sql db op list](/cli/azure/sql/db/op?#az_sql_db_op_list) to get a list of operations performed on the database, and use [az sql db op cancel](/cli/azure/sql/db/op#az_sql_db_op_cancel) to cancel an update operation on the database.
-### Clean up deployment
+### Clean up resources
-Use the following command to remove the resource group and all resources associated with it.
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resource
+```azurecli
+az group delete --name $resourceGroup
``` ## Sample reference
azure-sql Move Database Between Elastic Pools Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/move-database-between-elastic-pools-cli.md
Title: "The Azure CLI: Move a database between elastic pools"
-description: Use an Azure CLI example script to create two elastic pools and move a database in SQL Database from one elastic pool to another.
+ Title: "Azure CLI example: Move a database between elastic pools"
+description: Use this Azure CLI example script to create two elastic pools and move a database in SQL Database from one elastic pool to another.
Previously updated : 06/25/2019 Last updated : 12/23/2021
-# Use the Azure CLI to move a database in SQL Database in a SQL elastic pool
+
+# Use Azure CLI to move a database in SQL Database in a SQL elastic pool
This Azure CLI script example creates two elastic pools, moves a pooled database in SQL Database from one SQL elastic pool into another SQL elastic pool, and then moves the pooled database out of the SQL elastic pool to be a single database in SQL Database.
-If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+If you choose to install and use Azure CLI locally, this topic requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
## Sample script
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+ ### Sign in to Azure
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
### Run the script
-[!code-azurecli-interactive[main](../../../../cli_scripts/sql-database/move-database-between-pools/move-database-between-pools.sh "Move database between pools")]
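The sample script is included from the repository rather than shown inline. A minimal sketch of the move itself, assuming the pools and database already exist and using hypothetical names, sets the database's target elastic pool (or a standalone service objective) with `az sql db update`:

```azurecli
# Hypothetical values; replace with your own resource names.
resourceGroup="myResourceGroup"
server="mysqlserver-$RANDOM"
database="mySampleDatabase"

# Move the pooled database into a second elastic pool.
az sql db update \
    --resource-group $resourceGroup \
    --server $server \
    --name $database \
    --elastic-pool secondPool

# Move the database out of the pool by assigning a standalone service objective.
az sql db update \
    --resource-group $resourceGroup \
    --server $server \
    --name $database \
    --service-objective S0
```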
-### Clean up deployment
+### Clean up resources
-Use the following command to remove the resource group and all resources associated with it.
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resource
+```azurecli
+az group delete --name $resourceGroup
``` ## Sample reference
This script uses the following commands. Each command in the table links to comm
## Next steps
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../az-cli-script-samples-content-guide.md).
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-cli.md
+
+ Title: "Azure CLI example: Restore a backup"
+description: Use this Azure CLI example script to restore a database in Azure SQL Database to an earlier point in time from automatic backups.
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to restore a single database in Azure SQL Database to an earlier point in time
+
+This Azure CLI example restores a single database in Azure SQL Database to a specific point in time.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Sign in to Azure
+
+For this script, use Azure CLI locally, because the script takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
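The sample script is pulled in from the repository and not shown inline here. A minimal sketch of the restore step, using hypothetical names and a placeholder point in time, creates a new database from automatic backups with `az sql db restore`:

```azurecli
# Hypothetical values; replace with your own resource names and a valid point in time (UTC).
resourceGroup="myResourceGroup"
server="mysqlserver-$RANDOM"
database="mySampleDatabase"

# Restore the database to a new database as it existed at the given time.
az sql db restore \
    --resource-group $resourceGroup \
    --server $server \
    --name $database \
    --dest-name "${database}-restored" \
    --time "2021-12-23T13:10:00"
```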
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az sql db restore](/cli/azure/sql/db#az_sql_db_restore) | Restore database command. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-sql Scale Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/scale-pool-cli.md
Title: "The Azure CLI: Scale an elastic pool"
+ Title: "Azure CLI example: Scale an elastic pool"
description: Use an Azure CLI example script to scale an elastic pool in Azure SQL Database.
Previously updated : 06/25/2019 Last updated : 12/23/2021 # Use the Azure CLI to scale an elastic pool in Azure SQL Database
This Azure CLI script example creates elastic pools in Azure SQL Database, moves
If you choose to install and use the Azure CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+ ## Sample script
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+ ### Sign in to Azure
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
### Run the script
-[!code-azurecli-interactive[main](../../../../cli_scripts/sql-database/scale-pool/scale-pool.sh "Move database between pools")]
-### Clean up deployment
+### Clean up resources
-Use the following command to remove the resource group and all resources associated with it.
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resource
+```azurecli
+az group delete --name $resourceGroup
``` ## Sample reference
azure-sql Setup Geodr And Failover Elastic Pool Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-and-failover-elastic-pool-powershell.md
Last updated 03/12/2019 # Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database+ [!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)] This Azure PowerShell script example configures active geo-replication for a pooled database in Azure SQL Database and fails it over to the secondary replica of the database.
azure-sql Setup Geodr Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-database-cli.md
+
+ Title: "Azure CLI example: Active geo-replication-single Azure SQL Database"
+description: Use this Azure CLI example script to set up active geo-replication for a single database in Azure SQL Database and fail it over.
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to configure active geo-replication for a single database in Azure SQL Database
+
+This Azure CLI script example configures active geo-replication for a single database and fails it over to a secondary replica of the database.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
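The sample script is included from the repository. A minimal sketch of the two key steps, assuming hypothetical server names in different regions, creates the secondary replica and then fails over to it:

```azurecli
# Hypothetical values; replace with your own resource names.
resourceGroup="myResourceGroup"
primaryServer="mysqlserver-primary-$RANDOM"
secondaryServer="mysqlserver-secondary-$RANDOM"
database="mySampleDatabase"

# Create a readable secondary replica of the database on the secondary server.
az sql db replica create \
    --resource-group $resourceGroup \
    --server $primaryServer \
    --name $database \
    --partner-server $secondaryServer

# Fail over: promote the replica on the secondary server to primary.
az sql db replica set-primary \
    --resource-group $resourceGroup \
    --server $secondaryServer \
    --name $database
```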
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az sql db replica](/cli/azure/sql/db/replica) | Database replica commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
+
+ Title: "Azure CLI example: Configure a failover group for a group of databases in Azure SQL Database"
+description: Use this Azure CLI example script to set up a failover group for a set of databases in Azure SQL Database and fail it over.
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to configure a failover group for a group of databases in Azure SQL Database
+
+This Azure CLI script example configures a failover group for a group of databases in Azure SQL Database and fails it over to a secondary Azure SQL Database.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
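The sample script is included from the repository. A minimal sketch of the failover group steps, with hypothetical server, database, and group names, creates the group, adds a database to it, and fails over. For brevity this sketch uses a single resource group; the full sample places the partner server in a separate resource group, which is why the clean-up step below deletes two groups.

```azurecli
# Hypothetical values; replace with your own resource names.
resourceGroup="myResourceGroup"
server="mysqlserver-primary-$RANDOM"
partnerServer="mysqlserver-secondary-$RANDOM"
database="mySampleDatabase"
failoverGroup="myfailovergroup-$RANDOM"

# Create the failover group between the two servers and add the database to it.
az sql failover-group create \
    --resource-group $resourceGroup \
    --server $server \
    --partner-server $partnerServer \
    --name $failoverGroup \
    --add-db $database

# Fail over: make the partner server the primary for the group.
az sql failover-group set-primary \
    --resource-group $resourceGroup \
    --server $partnerServer \
    --name $failoverGroup
```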
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $failoverResourceGroup -y
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) | Creates a failover group. |
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) | Sets the primary of the failover group by failing over all databases from the current primary server. |
+| [az sql failover-group show](/cli/azure/sql/failover-group) | Gets a failover group. |
+| [az sql failover-group delete](/cli/azure/sql/failover-group) | Deletes a failover group. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-sql Setup Geodr Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-pool-cli.md
+
+ Title: "Azure CLI example: Configure active geo-replication for an elastic pool"
+description: Use this Azure CLI example script to set up active geo-replication for a pooled database in Azure SQL Database and fail it over.
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
+
+This Azure CLI script example configures active geo-replication for a pooled database in Azure SQL Database and fails it over to the secondary replica of the database.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+
+### Sign in to Azure
+
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
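The sample script is included from the repository. A rough sketch of the geo-replication steps for a pooled database, using hypothetical names (including the `$secondaryResourceGroup` referenced in the clean-up step below), creates an elastic pool on the secondary server, creates the replica into that pool, and fails over:

```azurecli
# Hypothetical values; replace with your own resource names.
resourceGroup="myResourceGroup"
secondaryResourceGroup="mySecondaryResourceGroup"
primaryServer="mysqlserver-primary-$RANDOM"
secondaryServer="mysqlserver-secondary-$RANDOM"
pool="myElasticPool"
database="myPooledDatabase"

# Create an elastic pool on the secondary server to host the replica.
az sql elastic-pool create \
    --resource-group $secondaryResourceGroup \
    --server $secondaryServer \
    --name $pool

# Create a readable secondary replica of the pooled database in that pool.
az sql db replica create \
    --resource-group $resourceGroup \
    --server $primaryServer \
    --name $database \
    --partner-resource-group $secondaryResourceGroup \
    --partner-server $secondaryServer \
    --elastic-pool $pool

# Fail over: promote the replica on the secondary server to primary.
az sql db replica set-primary \
    --resource-group $secondaryResourceGroup \
    --server $secondaryServer \
    --name $database
```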
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+az group delete --name $secondaryResourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az sql elastic-pool](/cli/azure/sql/elastic-pool) | Elastic pool commands |
+| [az sql db replica](/cli/azure/sql/db/replica) | Database replication commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Previously updated : 01/27/2021 Last updated : 12/09/2021 # Quickstart: Create an Azure SQL Database single database
In this quickstart, you create a [single database](single-database-overview.md)
> [!div class="nextstepaction"] > [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021) - ## Prerequisites - An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
To create a single database in the Azure portal, this quickstart starts at the A
1. Leave **Want to use SQL elastic pool** set to **No**. 1. Under **Compute + storage**, select **Configure database**.
-1. This quickstart uses a serverless database, so select **Serverless**, and then select **Apply**.
+1. This quickstart uses a serverless database, so select **Serverless**, and then select **Apply**.
![configure serverless database](./media/single-database-create-quickstart/configure-database.png)
To create a single database in the Azure portal, this quickstart starts at the A
![Networking tab](./media/single-database-create-quickstart/networking.png) - 1. On the **Additional settings** tab, in the **Data source** section, for **Use existing data**, select **Sample**. This creates an AdventureWorksLT sample database so there's some tables and data to query and experiment with, as opposed to an empty blank database. 1. Optionally, enable [Microsoft Defender for SQL](../database/azure-defender-for-sql.md). 1. Optionally, set the [maintenance window](../database/maintenance-window.md) so planned maintenance is performed at the best time for your database.
To create a single database in the Azure portal, this quickstart starts at the A
# [Azure CLI](#tab/azure-cli)
-## Launch Azure Cloud Shell
+You can create an Azure resource group, server, and single database using the Azure command-line interface (Azure CLI). If you don't want to use the Azure Cloud Shell, [install Azure CLI](/cli/azure/install-azure-cli) on your computer.
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+### Launch Azure Cloud Shell
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-## Set parameter values
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name. Replace the 0.0.0.0 values in the ip address range to match your specific environment.
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
-```azurecli-interactive
-# Set the resource group name and location for your server
-resourceGroupName=myResourceGroup
-location=eastus
+### Sign in to Azure
-# Set an admin login and password for your database
-adminlogin=azureuser
-password=Azure1234567!
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-# Set a server name that is unique to Azure DNS (<server_name>.database.windows.net)
-serverName=server-$RANDOM
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
-# Set the ip address range that can access your database
-startip=0.0.0.0
-endip=0.0.0.0
+az account set -s $subscription # ...or use 'az login'
```
-## Create a resource group
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+### Set parameter values
-```azurecli-interactive
-az group create --name $resourceGroupName --location $location
-```
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name.
-## Create a server
+Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
-Create a server with the [az sql server create](/cli/azure/sql/server) command.
-```azurecli-interactive
-az sql server create \
- --name $serverName \
- --resource-group $resourceGroupName \
- --location $location \
- --admin-user $adminlogin \
- --admin-password $password
-```
+### Create a resource group
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
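The command snippet itself is pulled in from a shared include and isn't rendered in this change summary. A minimal sketch, assuming the `$resourceGroup` and `$location` variables from the parameter values step:

```azurecli
# Assumes $resourceGroup and $location were set in the parameter values step.
az group create --name $resourceGroup --location $location
```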
-## Configure a firewall rule for the server
+### Create a server
-Create a firewall rule with the [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) command.
+Create a server with the [az sql server create](/cli/azure/sql/server) command.
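The server-creation snippet is likewise pulled in from an include. A minimal sketch, assuming the `$server`, `$login`, and `$password` variables from the parameter values step are placeholders you supply:

```azurecli
# Assumes the variables from the parameter values step; $login and $password
# are placeholder admin credentials you choose.
az sql server create \
    --name $server \
    --resource-group $resourceGroup \
    --location $location \
    --admin-user $login \
    --admin-password $password
```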
-```azurecli-interactive
-az sql server firewall-rule create \
- --resource-group $resourceGroupName \
- --server $serverName \
- -n AllowYourIp \
- --start-ip-address $startip \
- --end-ip-address $endip
-```
+### Configure a server-based firewall rule
-## Create a single database with Azure CLI
+Create a firewall rule with the [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) command.
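Again, the snippet is included from a shared file. A minimal sketch, assuming `$startIp` and `$endIp` hold the IP range you set in the parameter values step:

```azurecli
# Assumes $startIp and $endIp were set in the parameter values step.
az sql server firewall-rule create \
    --resource-group $resourceGroup \
    --server $server \
    --name AllowYourIp \
    --start-ip-address $startIp \
    --end-ip-address $endIp
```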
-Create a database with the [az sql db create](/cli/azure/sql/db) command.
+### Create a single database
-```azurecli-interactive
+Create a database with the [az sql db create](/cli/azure/sql/db) command in the [serverless compute tier](serverless-tier-overview.md).
+
+```azurecli
az sql db create \
- --resource-group $resourceGroupName \
- --server $serverName \
- --name mySampleDatabase \
+ --resource-group $resourceGroup \
+ --server $server \
+ --name $database \
--sample-name AdventureWorksLT \ --edition GeneralPurpose \ --compute-model Serverless \
az sql db create \
# [Azure CLI (sql up)](#tab/azure-cli-sql-up)
-## Use Azure Cloud Shell
+You can create an Azure resource group, server, and single database using the Azure command-line interface (Azure CLI). If you don't want to use the Azure Cloud Shell, [install Azure CLI](/cli/azure/install-azure-cli) on your computer.
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+The following Azure CLI code blocks create a resource group, server, single database, and server-level IP firewall rule for access to the server. Make sure to record the generated resource group and server names, so you can manage these resources later.
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
+### Launch Azure Cloud Shell
-## Set parameter values
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name. Replace the 0.0.0.0 values in the ip address range to match your specific environment.
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-```azurecli-interactive
-# Set the resource group name and location for your server
-resourceGroupName=myResourceGroup
-location=eastus
+When Cloud Shell opens, verify that **Bash** is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
-# Set an admin login and password for your database
-adminlogin=azureuser
-password=Azure1234567!
+### Sign in to Azure
-# Set a server name that is unique to Azure DNS (<server_name>.database.windows.net)
-serverName=server-$RANDOM
+Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing `<subscriptionId>` with your Azure subscription ID. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
-# Set the ip address range that can access your database
-startip=0.0.0.0
-endip=0.0.0.0
+az account set -s $subscription # ...or use 'az login'
```
-## Create a database and resources
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
-The [az sql up](/cli/azure/sql#az_sql_up) command simplifies the database creation process. With it, you can create a database and all of its associated resources with a single command. This includes the resource group, server name, server location, database name, and login information. The database is created with a default pricing tier of General Purpose, Provisioned, Gen5, 2 vCores.
+### Set parameter values
+
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name.
+
+Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment.
++
+> [!NOTE]
+> [az sql up](/cli/azure/sql#az_sql_up) is currently in preview and does not support the serverless compute tier. Also, the use of non-alphabetic and non-numeric characters in the database name is not currently supported.
+
+### Create a database and resources
+
+The [az sql up](/cli/azure/sql#az_sql_up) command simplifies the database creation process. With it, you can create a database and all of its associated resources with a single command. This includes the resource group, server name, server location, database name, and login information. The database is created with a default pricing tier of General Purpose, Provisioned, Gen5, 2 vCores.
This command creates and configures a [logical server](logical-servers.md) for Azure SQL Database for immediate use. For more granular resource control during database creation, use the standard Azure CLI commands in this article. > [!NOTE]
-> When running the `az sql up` command for the first time, the Azure CLI prompts you to install the `db-up` extension. This extension is currently in preview. Accept the installation to continue. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+> When running the `az sql up` command for the first time, Azure CLI prompts you to install the `db-up` extension. This extension is currently in preview. Accept the installation to continue. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
1. Run the `az sql up` command. If any required parameters aren't used, like `--server-name`, that resource is created with a random name and login information assigned to it.
- ```azurecli-interactive
+ ```azurecli
az sql up \
- --resource-group $resourceGroupName \
+ --resource-group $resourceGroup \
--location $location \
- --server-name $serverName \
- --database-name mySampleDatabase \
- --admin-user $adminlogin \
+ --server-name $server \
+ --database-name $database \
+ --admin-user $login \
--admin-password $password+ ```
-2. A server firewall rule is automatically created. If the server declines your IP address, create a new firewall rule using the `az sql server firewall-rule create` command.
+2. A server firewall rule is automatically created. If the server declines your IP address, create a new firewall rule using the `az sql server firewall-rule create` command and specifying appropriate start and end IP addresses.
- ```azurecli-interactive
+ ```azurecli
+ startIp=0.0.0.0
+ endIp=0.0.0.0
az sql server firewall-rule create \
- --resource-group $resourceGroupName \
- --server $serverName \
+ --resource-group $resourceGroup \
+ --server $server \
-n AllowYourIp \
- --start-ip-address $startip \
- --end-ip-address $endip
+ --start-ip-address $startIp \
+ --end-ip-address $endIp
+ ``` 3. All required resources are created, and the database is ready for queries.
This command creates and configures a [logical server](logical-servers.md) for A
You can create a resource group, server, and single database using Windows PowerShell.
-## Launch Azure Cloud Shell
+### Launch Azure Cloud Shell
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press **Enter** to run it.
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
-## Set parameter values
+When Cloud Shell opens, verify that **PowerShell** is selected for your environment. Subsequent sessions will use Azure PowerShell. Select **Copy** to copy the blocks of code, paste them into Cloud Shell, and press **Enter** to run them.
+
+### Set parameter values
The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the Get-Random cmdlet is used to create the server name. Replace the 0.0.0.0 values in the ip address range to match your specific environment.
The following values are used in subsequent commands to create the database and
Write-host "Server name is" $serverName ``` -
-## Create resource group
+### Create resource group
Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed.
Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.
$resourceGroup ``` -
-## Create a server
+### Create a server
Create a server with the [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver) cmdlet.
Create a server with the [New-AzSqlServer](/powershell/module/az.sql/new-azsqlse
$server ```
-## Create a firewall rule
+### Create a firewall rule
Create a server firewall rule with the [New-AzSqlServerFirewallRule](/powershell/module/az.sql/new-azsqlserverfirewallrule) cmdlet.
Create a server firewall rule with the [New-AzSqlServerFirewallRule](/powershell
$serverFirewallRule ``` -
-## Create a single database with PowerShell
+### Create a single database with PowerShell
Create a single database with the [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) cmdlet.
Create a single database with the [New-AzSqlDatabase](/powershell/module/az.sql/
-- ## Query the database Once your database is created, you can use the **Query editor (preview)** in the Azure portal to connect to the database and query data.
Keep the resource group, server, and single database to go on to the next steps,
When you're finished using these resources, you can delete the resource group you created, which will also delete the server and single database within it.
-### [Portal](#tab/azure-portal)
+# [Portal](#tab/azure-portal)
To delete **myResourceGroup** and all its resources using the Azure portal:
To delete **myResourceGroup** and all its resources using the Azure portal:
1. On the resource group page, select **Delete resource group**. 1. Under **Type the resource group name**, enter *myResourceGroup*, and then select **Delete**.
-### [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli)
-To delete the resource group and all its resources, run the following Azure CLI command, using the name of your resource group:
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+```azurecli
+az group delete --name $resourceGroup
```
-### [Azure CLI (sql up)](#tab/azure-cli-sql-up)
+# [Azure CLI (sql up)](#tab/azure-cli-sql-up)
-To delete the resource group and all its resources, run the following Azure CLI command, using the name of your resource group:
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+```azurecli
+az group delete --name $resourceGroup
```
-### [PowerShell](#tab/azure-powershell)
+# [PowerShell](#tab/azure-powershell)
To delete the resource group and all its resources, run the following PowerShell cmdlet, using the name of your resource group:
azure-sql Single Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-manage.md
To create and manage servers, single and pooled databases, and server-level fire
|[Remove-AzSqlServerFirewallRule](/powershell/module/az.sql/remove-azsqlserverfirewallrule)|Deletes a firewall rule from a server.| | New-AzSqlServerVirtualNetworkRule | Creates a [*virtual network rule*](vnet-service-endpoint-rule-overview.md), based on a subnet that is a Virtual Network service endpoint. |
-## The Azure CLI
+## Azure CLI
-To create and manage the servers, databases, and firewalls with [the Azure CLI](/cli/azure), use the following [Azure CLI](/cli/azure/sql/db) commands. Use the [Cloud Shell](../../cloud-shell/overview.md) to run the CLI in your browser, or [install](/cli/azure/install-azure-cli) it on macOS, Linux, or Windows. For creating and managing elastic pools, see [Elastic pools](elastic-pool-overview.md).
+To create and manage the servers, databases, and firewalls with [Azure CLI](/cli/azure), use the following [Azure CLI](/cli/azure/sql/db) commands. Use the [Cloud Shell](../../cloud-shell/overview.md) to run Azure CLI in your browser, or [install](/cli/azure/install-azure-cli) it on macOS, Linux, or Windows. For creating and managing elastic pools, see [Elastic pools](elastic-pool-overview.md).
> [!TIP]
-> For an Azure CLI quickstart, see [Create a single Azure SQL Database using the Azure CLI](az-cli-script-samples-content-guide.md). For Azure CLI example scripts, see [Use CLI to create a database in Azure SQL Database and configure a SQL Database firewall rule](scripts/create-and-configure-database-cli.md) and [Use CLI to monitor and scale a database in Azure SQL Database](scripts/monitor-and-scale-database-cli.md).
+> For an Azure CLI quickstart, see [Create a single Azure SQL Database using Azure CLI](az-cli-script-samples-content-guide.md). For Azure CLI example scripts, see [Use CLI to create a database in Azure SQL Database and configure a SQL Database firewall rule](scripts/create-and-configure-database-cli.md) and [Use CLI to monitor and scale a database in Azure SQL Database](scripts/monitor-and-scale-database-cli.md).
> | Cmdlet | Description |
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
Title: Enable SQL TDE with Azure Key Vault
-description: "Learn how to configure an Azure SQL Database and Azure Synapse Analytics to start using Transparent Data Encryption (TDE) for encryption-at-rest using PowerShell or the Azure CLI."
+description: "Learn how to configure an Azure SQL Database and Azure Synapse Analytics to start using Transparent Data Encryption (TDE) for encryption-at-rest using PowerShell or Azure CLI."
Last updated 06/23/2021
-# PowerShell and the Azure CLI: Enable Transparent Data Encryption with customer-managed key from Azure Key Vault
+# PowerShell and Azure CLI: Enable Transparent Data Encryption with customer-managed key from Azure Key Vault
+ [!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
-This article walks through how to use a key from Azure Key Vault for Transparent Data Encryption (TDE) on Azure SQL Database or Azure Synapse Analytics. To learn more about the TDE with Azure Key Vault integration - Bring Your Own Key (BYOK) Support, visit [TDE with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
+This article walks through how to use a key from Azure Key Vault for Transparent Data Encryption (TDE) on Azure SQL Database or Azure Synapse Analytics. To learn more about the TDE with Azure Key Vault integration - Bring Your Own Key (BYOK) Support, visit [TDE with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
> [!NOTE] > Azure SQL now supports using an RSA key stored in a Managed HSM as TDE Protector. Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. Learn more about [Managed HSMs](../../key-vault/managed-hsm/index.yml).
Use the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvau
Set-AzKeyVaultAccessPolicy -VaultName <KeyVaultName> ` -ObjectId $server.Identity.PrincipalId -PermissionsToKeys get, wrapKey, unwrapKey ```+ For adding permissions to your server on a Managed HSM, add the 'Managed HSM Crypto Service Encryption User' local RBAC role to the server. This will enable the server to perform get, wrap key, unwrap key operations on the keys in the Managed HSM. [Instructions for provisioning server access on Managed HSM](../../key-vault/managed-hsm/role-management.md)
Get-AzSqlDatabaseTransparentDataEncryptionActivity -ResourceGroupName <SQLDataba
# [The Azure CLI](#tab/azure-cli)
-To install the required version of the Azure CLI (version 2.0 or later) and connect to your Azure subscription, see [Install and Configure the Azure Cross-Platform Command-Line Interface 2.0](/cli/azure/install-azure-cli).
+To install the required version of Azure CLI (version 2.0 or later) and connect to your Azure subscription, see [Install and Configure the Azure Cross-Platform Command-Line Interface 2.0](/cli/azure/install-azure-cli).
-For specifics on Key Vault, see [Manage Key Vault using the CLI 2.0](../../key-vault/general/manage-with-cli2.md) and [How to use Key Vault soft-delete with the CLI](../../key-vault/general/key-vault-recovery.md).
+For specifics on Key Vault, see [Manage Key Vault using Azure CLI 2.0](../../key-vault/general/manage-with-cli2.md) and [How to use Key Vault soft-delete with the CLI](../../key-vault/general/key-vault-recovery.md).
## Assign an Azure AD identity to your server
az sql db tde show --database <dbname> --server <servername> --resource-group <r
Remove-AzSqlServerKeyVaultKey -KeyId <KeyVaultKeyId> -ServerName <LogicalServerName> -ResourceGroupName <SQLDatabaseResourceGroupName> ```
-# [The Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli)
- For general database settings, see [az sql](/cli/azure/sql).
Check the following if an issue occurs:
Get-AzSubscription -SubscriptionId <SubscriptionId> ```
- # [The Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/azure-cli)
```azurecli az account show - s <SubscriptionId>
Check the following if an issue occurs:
* * * - If the new key cannot be added to the server, or the new key cannot be updated as the TDE Protector, check the following:
- - The key should not have an expiration date
- - The key must have the *get*, *wrap key*, and *unwrap key* operations enabled.
+ - The key should not have an expiration date
+ - The key must have the *get*, *wrap key*, and *unwrap key* operations enabled.
## Next steps
azure-sql Understand Resolve Blocking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/understand-resolve-blocking.md
The Waittype, Open_Tran, and Status columns refer to information returned by [sy
* [Quickstart: Extended events in SQL Server](/sql/relational-databases/extended-events/quick-start-extended-events-in-sql-server) * [Intelligent Insights using AI to monitor and troubleshoot database performance](intelligent-insights-overview.md)
-## Learn more
+## Next steps
* [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning) * [Deliver consistent performance with Azure SQL](/learn/modules/azure-sql-performance/) * [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md) * [Transient Fault Handling](/aspnet/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling) * [Configure the max degree of parallelism (MAXDOP) in Azure SQL Database](configure-max-degree-of-parallelism.md)
+* [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
azure-sql Identify Query Performance Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/identify-query-performance-issues.md
DMVs that track Query Store and wait statistics show results for only successful
> - [TigerToolbox waits and latches](https://github.com/Microsoft/tigertoolbox/tree/master/Waits-and-Latches) > - [TigerToolbox usp_whatsup](https://github.com/Microsoft/tigertoolbox/tree/master/usp_WhatsUp)
-## See also
-
-* [Configure the max degree of parallelism (MAXDOP) in Azure SQL Database](database/configure-max-degree-of-parallelism.md)
-* [Understand and resolve Azure SQL Database blocking problems in Azure SQL Database](database/understand-resolve-blocking.md)
- ## Next steps
-* [SQL Database monitoring and tuning overview](database/monitor-tune-overview.md)
+- [Configure the max degree of parallelism (MAXDOP) in Azure SQL Database](database/configure-max-degree-of-parallelism.md)
+- [Understand and resolve Azure SQL Database blocking problems in Azure SQL Database](database/understand-resolve-blocking.md)
+- [Diagnose and troubleshoot high CPU on Azure SQL Database](database/high-cpu-diagnose-troubleshoot.md)
+- [SQL Database monitoring and tuning overview](database/monitor-tune-overview.md)
+- [Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views](database/monitoring-with-dmvs.md)
azure-sql Api References Create Manage Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/api-references-create-manage-instance.md
Last updated 03/12/2019 # Managed API reference for Azure SQL Managed Instance+ [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)] You can create and configure managed instances of Azure SQL Managed Instance using the Azure portal, PowerShell, Azure CLI, REST API, and Transact-SQL. In this article, you can find an overview of the functions and the API that you can use to create and configure managed instances.
To create and manage managed instances with Azure PowerShell, use the following
## Azure CLI: Create and configure managed instances
-To create and configure managed instances with [Azure CLI](/cli/azure), use the following [Azure CLI commands for SQL Managed Instance](/cli/azure/sql/mi). Use [Azure Cloud Shell](../../cloud-shell/overview.md) to run the CLI in your browser, or [install](/cli/azure/install-azure-cli) it on macOS, Linux, or Windows.
+To create and configure managed instances with [Azure CLI](/cli/azure), use the following [Azure CLI commands for SQL Managed Instance](/cli/azure/sql/mi). Use [Azure Cloud Shell](../../cloud-shell/overview.md) to run Azure CLI in your browser, or [install](/cli/azure/install-azure-cli) it on macOS, Linux, or Windows.
> [!TIP] > For an Azure CLI quickstart, see [Working with SQL Managed Instance using Azure CLI](https://medium.com/azure-sqldb-managed-instance/working-with-sql-managed-instance-using-azure-cli-611795fe0b44).
To create and configure managed instances, use these REST API requests.
## Next steps - To learn about migrating a SQL Server database to Azure, see [Migrate to Azure SQL Database](../database/migrate-to-database-from-sql-server.md).-- For information about supported features, see [Features](../database/features-comparison.md).
+- For information about supported features, see [Features](../database/features-comparison.md).
azure-sql Create Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/create-template-quickstart.md
Remove-AzResourceGroup -Name $resourceGroupName
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
echo "Enter the Resource Group name:" && read resourceGroupName && az group delete --name $resourceGroupName
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Previously updated : 12/20/2021 Last updated : 01/04/2022 # Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article explains how to manually configure database migration from SQL Server 2008-2019 to Azure SQL Managed Instance by using Log Replay Service (LRS), currently in public preview. LRS is a cloud service enabled for SQL Managed Instance based on SQL Server log-shipping technology.
+This article explains how to manually configure database migration from SQL Server 2008-2019 to Azure SQL Managed Instance by using Log Replay Service (LRS), currently in public preview. LRS is a free of charge cloud service enabled for SQL Managed Instance based on SQL Server log-shipping technology.
[Azure Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md) and LRS use the same underlying migration technology and the same APIs. By releasing LRS, we're further enabling complex custom migrations and hybrid architectures between on-premises SQL Server and SQL Managed Instance.
After LRS is stopped, either automatically through autocomplete, or manually thr
- Shared access signature (SAS) security token with read and list permissions generated for the Blob Storage container ### Azure RBAC permissions+ Running LRS through the provided clients requires one of the following Azure roles: - Subscription Owner role - [Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role - Custom role with the following permission: `Microsoft.Sql/managedInstances/databases/*`
+## Requirements
+
+Ensure that the following requirements are met:
+- Use the full recovery model on SQL Server (mandatory).
+- Use `CHECKSUM` for backups on SQL Server (mandatory).
+- Place backup files for an individual database inside a separate folder in a flat-file structure (mandatory). Nested folders inside database folders are not supported.
+- Plan to complete the migration within 36 hours after you start LRS (mandatory). This is a grace period during which system-managed software patches are postponed.
+ ## Best practices We recommend the following best practices: - Run [Data Migration Assistant](/sql/dma/dma-overview) to validate that your databases are ready to be migrated to SQL Managed Instance. - Split full and differential backups into multiple files, instead of using a single file.-- Use the full recovery model (mandatory).-- Use `CHECKSUM` for backups (mandatory).-- Enable backup compression.-- Place backup files for an individual database inside a separate folder. Use flat-file structure as nested folders are not supported.
+- Enable backup compression to improve network transfer speeds.
- Use Cloud Shell to run PowerShell or CLI scripts, because it will always be updated to the latest cmdlets released.-- Plan to complete the migration within 36 hours after you start LRS. This is a grace period during which system-managed software patches are postponed. > [!IMPORTANT] > - You can't use databases being restored through LRS until the migration process completes.
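For reference, starting and completing a migration with Azure CLI looks roughly like the following sketch, assuming the `az sql midb log-replay` command group (check `az sql midb log-replay --help` for the exact parameters in your CLI version). The storage URI and SAS token values come from the steps described next, and all resource names are placeholders.

```azurecli
# Sketch only: start LRS for one database, check progress, then cut over after the last backup is restored.
az sql midb log-replay start \
    --resource-group myResourceGroup \
    --managed-instance my-managed-instance \
    --name myDatabase \
    --storage-uri "https://<storage-account>.blob.core.windows.net/<container>/<database-folder>" \
    --storage-sas "<StorageContainerSasToken>"

# Monitor the restore status while backups are being replayed.
az sql midb log-replay show \
    --resource-group myResourceGroup \
    --managed-instance my-managed-instance \
    --name myDatabase

# Complete the migration once the final log backup has been restored.
az sql midb log-replay complete \
    --resource-group myResourceGroup \
    --managed-instance my-managed-instance \
    --name myDatabase \
    --last-backup-name "<last-log-backup-file-name>.bak"
```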
Copy the parameters as follows:
:::image type="content" source="./media/log-replay-service-migrate/lrs-token-uri-copy-part-01.png" alt-text="Screenshot that shows copying the first part of the token.":::
-2. Copy the second part of the token, starting from the question mark (`?`) all the way until the end of the string. Use it as the `StorageContainerSasToken` parameter in PowerShell or the Azure CLI for starting LRS.
+2. Copy the second part of the token, starting after the question mark (`?`) all the way until the end of the string. Use it as the `StorageContainerSasToken` parameter in PowerShell or the Azure CLI for starting LRS.
:::image type="content" source="./media/log-replay-service-migrate/lrs-token-uri-copy-part-02.png" alt-text="Screenshot that shows copying the second part of the token."::: > [!NOTE]
-> Don't include the question mark when you copy either part of the token.
+> Don't include the question mark (`?`) when you copy either part of the token.
> ### Log in to Azure and select a subscription
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
Hardware generations have different characteristics, as described in the followi
### Regional support for premium-series hardware generations (preview) Support for the premium-series hardware generations (public preview) is currently available only in these specific regions: <br>
-Last updated: 11/24/2021
| Region | **Premium-series** | **Memory optimized premium-series** | |: |: |: |
+| Canada Central | Yes | |
+| Canada East | Yes | |
| Central US | Yes | Yes |
-| East US | Yes | |
+| East US | Yes | Yes |
| East US 2 | Yes | Yes |
+| France Central | | Yes |
| North Central US | Yes | Yes | | North Europe | Yes | Yes |
-| South Central US | Yes | |
-| UK South | Yes | Yes |
+| South Central US | Yes | Yes |
+| Southeast Asia | Yes | |
+| UK South | | Yes |
| West Europe | Yes | Yes | | West US | Yes | Yes | | West US 2 | Yes | Yes | | West US 3 | Yes | Yes | - ### In-memory OLTP available space The amount of in-memory OLTP space in [Business Critical](../database/service-tier-business-critical.md) service tier depends on the number of vCores and hardware generation. The following table lists the limits of memory that can be used for in-memory OLTP objects.
azure-sql Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli.md
+
+ Title: "Azure CLI example: Create a managed instance"
+description: Use this Azure CLI example script to create a managed instance in Azure SQL Managed Instance
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to create an Azure SQL Managed Instance
+
+This Azure CLI script example creates an Azure SQL Managed Instance in a dedicated subnet within a new virtual network. It also configures a route table and a network security group for the virtual network. After the script runs successfully, the managed instance can be accessed from within the virtual network or from an on-premises environment. For connection details, see [Configure Azure VM to connect to an Azure SQL Managed Instance](../../../azure-sql/managed-instance/connect-vm-instance-configure.md) and [Configure a point-to-site connection to an Azure SQL Managed Instance from on-premises](../../../azure-sql/managed-instance/point-to-site-p2s-configure.md).
+
+> [!IMPORTANT]
+> For limitations, see [supported regions](../../../azure-sql/managed-instance/resource-limits.md#supported-regions) and [supported subscription types](../../../azure-sql/managed-instance/resource-limits.md#supported-subscription-types).
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Sample script
+
+### Sign in to Azure
+
+For this script, use Azure CLI locally, because the script takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
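The sample script itself isn't reproduced in this excerpt. As a rough sketch only, the core commands it relies on (matching the sample reference table below) might look like the following; every name, address range, and credential is a placeholder, and the real sample also configures the network security group and route table rules required by SQL Managed Instance.

```azurecli
# Sketch only: network prerequisites plus the managed instance itself.
az network vnet create --resource-group $resourceGroup --name $vnet --address-prefixes 10.0.0.0/16

az network vnet subnet create --resource-group $resourceGroup --vnet-name $vnet --name $subnet \
    --address-prefixes 10.0.0.0/24 --delegations Microsoft.Sql/managedInstances

az network route-table create --resource-group $resourceGroup --name $routeTable
az network vnet subnet update --resource-group $resourceGroup --vnet-name $vnet --name $subnet \
    --route-table $routeTable

az sql mi create --resource-group $resourceGroup --name $instance --location $location \
    --admin-user $adminUser --admin-password $adminPassword \
    --subnet $(az network vnet subnet show --resource-group $resourceGroup --vnet-name $vnet --name $subnet --query id -o tsv)
```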
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources can take a while to create, and also to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az network vnet](/cli/azure/network/vnet) | Virtual network commands. |
+| [az network vnet subnet](/cli/azure/network/vnet/subnet) | Virtual network subnet commands. |
+| [az network route-table](/cli/azure/network/route-table) | Network route table commands. |
+| [az sql mi](/cli/azure/sql/mi) | SQL Managed Instance commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
+
+ Title: "Azure CLI example: Restore geo-backup - Azure SQL Database"
+description: Use this Azure CLI example script to restore an Azure SQL Managed Instance Database from a geo-redundant backup.
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Use CLI to restore a Managed Instance database to another geo-region
+
+This Azure CLI script example restores an Azure SQL Managed Instance database from a remote geo-region (geo-restore) to a point in time.
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Prerequisites
+
+An existing pair of managed instances in different regions. To create one, see [Use Azure CLI to create an Azure SQL Managed Instance](create-configure-managed-instance-cli.md).
+
+## Sample script
+
+### Sign in to Azure
+
+For this script, use Azure CLI locally, because the script takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources can take a while to create, and also to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az sql midb](/cli/azure/sql/midb) | Managed Instance Database commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-sql Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
+
+ Title: "Azure CLI example: Enable BYOK TDE - Azure SQL Managed Instance"
+description: "Learn how to configure an Azure SQL Managed Instance to start using BYOK Transparent Data Encryption (TDE) for encryption-at-rest using PowerShell."
++++
+ms.devlang: azurecli
++++ Last updated : 12/23/2021++
+# Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
+
+This Azure CLI script example configures Transparent Data Encryption (TDE) with a customer-managed key for Azure SQL Managed Instance, using a key from Azure Key Vault. This scenario is often referred to as Bring Your Own Key (BYOK) for TDE. To learn more about TDE with customer-managed keys, see [TDE Bring Your Own Key to Azure SQL](../../../azure-sql/database/transparent-data-encryption-byok-overview.md).
+
+If you choose to install and use Azure CLI locally, this article requires that you are running Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+> [!IMPORTANT]
+> When running Bash on Windows, run this script from within a Docker container.
+
+## Prerequisites
+
+An existing managed instance. To create one, see [Use Azure CLI to create an Azure SQL Managed Instance](create-configure-managed-instance-cli.md).
+
+## Sample script
+
+### Sign in to Azure
+
+For this script, use Azure CLI locally, because the script takes too long to run in Cloud Shell. Use the following script to sign in using a specific subscription. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+
+```azurecli-interactive
+subscription="<subscriptionId>" # add subscription here
+
+az account set -s $subscription # ...or use 'az login'
+```
+
+For more information, see [set active subscription](/cli/azure/account#az_account_set) or [log in interactively](/cli/azure/reference-index#az_login).
+
+### Run the script
++
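The sample script itself isn't reproduced in this excerpt. As a rough, hedged sketch of the key steps (confirm exact parameter names with `--help`; all names are placeholders, and the instance is assumed to have a system-assigned identity), the flow might look like the following.

```azurecli
# Sketch only: create a key vault and key, let the instance's identity use it, then set it as the TDE protector.
az keyvault create --resource-group $resourceGroup --name $keyVault --location $location \
    --enable-purge-protection true

az keyvault key create --vault-name $keyVault --name $keyName --kty RSA

# Grant the managed instance's system-assigned identity access to the key.
miPrincipalId=$(az sql mi show --resource-group $resourceGroup --name $instance --query identity.principalId -o tsv)
az keyvault set-policy --name $keyVault --object-id $miPrincipalId \
    --key-permissions get unwrapKey wrapKey

# Register the key with the instance and make it the TDE protector.
keyId=$(az keyvault key show --vault-name $keyVault --name $keyName --query key.kid -o tsv)
az sql mi key create --resource-group $resourceGroup --managed-instance $instance --kid $keyId
az sql mi tde-key set --resource-group $resourceGroup --managed-instance $instance \
    --server-key-type AzureKeyVault --kid $keyId
```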
+### Clean up resources
+
+Use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources can take a while to create, and also to delete.
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the following commands. Each command in the table links to command-specific documentation.
+
+| Command | Description |
+|||
+| [az sql db](/cli/azure/sql/db) | Database commands. |
+| [az sql failover-group](/cli/azure/sql/failover-group) | Failover group commands. |
+
+## Next steps
+
+For more information on Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../../azure-sql/database/az-cli-script-samples-content-guide.md).
azure-video-analyzer Visualize Ai Events Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/visualize-ai-events-power-bi.md
In this tutorial, you will:
> [!TIP] > > - The [Line Crossing sample](use-line-crossing.md) uses a 5-minute video recording. For best results in visualization, use the 60-minute recording of vehicles on a freeway available in [Other dataset](https://github.com/Azure/video-analyzer/tree/main/media#other-dataset).
- > - Refer Configuration and deployment section in [FAQs](https://github.com/MicrosoftDocs/azure-docs-pr/pull/edge/faq.yml) on how to add sample video files to rtsp simulator. Once added, edit `rtspUrl` value to point to the new video file.
+ > - Refer to the Configuration and deployment section in the [FAQs](edge/faq.yml) to learn how to add sample video files to the RTSP simulator. Once added, edit the `rtspUrl` value to point to the new video file.
> - If you followed the Line Crossing sample and are using the [AVA C# sample repository](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp), then edit operations.json file at properties -> parameters -> value to `"rtsp://rtspsim:554/media/camera-3600s.mkv"` to change video source to 60-minute recording. - A [Power BI](https://powerbi.microsoft.com/) account.
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 12/10/2021 Last updated : 01/03/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
* Bug fixes * Deprecated functionality
-## December 2021
+## December 2021
+
+### The projects feature is now GA
+
+The projects feature is now GA and ready for production use. There is no pricing impact related to the "Preview to GA" transition. See [Add video clips to your projects](use-editor-create-project.md).
+
+### New source languages support for STT, translation, and search on API level
+
+Video Analyzer for Media introduces source language support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) at the API level.
### Matched person detection capability
backup Backup Azure Dataprotection Use Rest Api Create Update Backup Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-dataprotection-use-rest-api-create-update-backup-vault.md
ms.assetid: 93861379-5bec-4ed5-95d2-46f534a115fd
# Create Azure Backup vault using REST API
-Azure Backup's new Data Protection platform provides enhanced capabilities for backup and restore for newer workloads such as blobs in storage accounts, managed disk and PostGre SQL server's PaaS platform. It aims to minimize management overhead while making it easy for organizing backups. A 'Backup vault' is the cornerstone of the Data protection platform and this is different from the 'Recovery Services' vault.
+Azure Backup's new Data Protection platform provides enhanced capabilities for backup and restore of newer workloads such as blobs in storage accounts, managed disks, and Azure Database for PostgreSQL servers. It aims to minimize management overhead while making it easy to organize backups. A 'Backup vault' is the cornerstone of the Data Protection platform, and it's different from the 'Recovery Services' vault.
The steps to create an Azure Backup vault using REST API are outlined in [create vault REST API](/rest/api/dataprotection/backup-vaults/create-or-update) documentation. Let's use this document as a reference to create a vault called "testBkpVault" in "West US" and under 'TestBkpVaultRG' resource group.
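As a hedged illustration only, the create-or-update request could be issued with `az rest` roughly as follows. The api-version placeholder and the exact request body schema (the `storageSettings` shape here is an assumption) must be confirmed against the linked REST API reference.

```azurecli
# Sketch only: issue the Create-or-Update backup vault request with az rest.
# Confirm the current api-version and request body schema in the linked REST API reference.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/TestBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/testBkpVault?api-version=<api-version>" \
    --body '{
        "location": "westus",
        "identity": { "type": "SystemAssigned" },
        "properties": {
            "storageSettings": [
                { "datastoreType": "VaultStore", "type": "LocallyRedundant" }
            ]
        }
    }'
```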
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding min
| | | | | | Create Recovery Services vault | Backup Contributor | Resource group containing the vault | | | Enable backup of Azure VMs | Backup Operator | Resource group containing the vault | |
-| | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
| On-demand backup of VM | Backup Operator | Recovery Services vault | | | Restore VM | Backup Operator | Recovery Services vault | |
-| | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action |
-| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read|
| Restore unmanaged disks VM backup | Backup Operator | Recovery Services vault |
-| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
| | Storage Account Contributor | Storage account resource where disks are going to be restored | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Storage/storageAccounts/write | | Restore managed disks from VM backup | Backup Operator | Recovery Services vault |
-| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
| | Storage Account Contributor | Temporary Storage account selected as part of restore to hold data from vault before converting them to managed disks | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Storage/storageAccounts/write | | | Contributor | Resource group to which managed disk(s) will be restored | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write| | Restore individual files from VM backup | Backup Operator | Recovery Services vault |
-| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
| Cross region restore | Backup Operator | Subscription of the recovery Services vault | This is in addition of the restore permissions mentioned above. Specifically for CRR, instead of a built-in-role, you can consider a custom role which has the following permissions: "Microsoft.RecoveryServices/locations/backupAadProperties/read" "Microsoft.RecoveryServices/locations/backupCrrJobs/action" "Microsoft.RecoveryServices/locations/backupCrrJob/action" "Microsoft.RecoveryServices/locations/backupCrossRegionRestore/action" "Microsoft.RecoveryServices/locations/backupCrrOperationResults/read" "Microsoft.RecoveryServices/locations/backupCrrOperationsStatus/read" | | Create backup policy for Azure VM backup | Backup Contributor | Recovery Services vault | | Modify backup policy of Azure VM backup | Backup Contributor | Recovery Services vault |
The following table captures the Backup management actions and corresponding min
| | | | | | Create Recovery Services vault | Backup Contributor | Resource group containing the vault | | | Enable backup of SQL and/or HANA databases | Backup Operator | Resource group containing the vault | |
-| | Virtual Machine Contributor | VM resource where DB is installed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | VM resource where DB is installed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
| On-demand backup of DB | Backup Operator | Recovery Services vault | | | Restore database or Restore as files | Backup Operator | Recovery Services vault | |
-| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
-| | Virtual Machine Contributor | Target VM in which DB will be restored or files are created | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
+| | Virtual Machine Contributor | Target VM in which DB will be restored or files are created | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
| Create backup policy for Azure VM backup | Backup Contributor | Recovery Services vault | | Modify backup policy of Azure VM backup | Backup Contributor | Recovery Services vault | | Delete backup policy of Azure VM backup | Backup Contributor | Recovery Services vault |
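Where the table above suggests a custom role instead of a built-in role, a rough sketch of defining one with Azure CLI could look like the following; the role name, description, and assignable scope are placeholders.

```azurecli
# Sketch only: define a minimal custom role with the VM permissions referenced in the table above.
cat > vm-backup-custom-role.json <<'EOF'
{
  "Name": "VM Backup Operator (custom)",
  "IsCustom": true,
  "Description": "Minimal VM permissions used when enabling Azure VM backup.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/write",
    "Microsoft.Compute/virtualMachines/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscriptionId>" ]
}
EOF

az role definition create --role-definition vm-backup-custom-role.json
```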
backup Guidance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/guidance-best-practices.md
Azure Backup provides you with the [Multi-User Authorization (MUA)](/azure/backu
### Monitoring and alerts of suspicious activity
-You may encounter scenarios where someone tries to breach into your system and maliciously turn off the security mechanisms, such as disabling Soft= Delete or attempts to perform destructive operations, such as deleting the backup resources.
+You may encounter scenarios where someone tries to breach your system and maliciously turn off security mechanisms, such as disabling Soft Delete, or attempts to perform destructive operations, such as deleting the backup resources.
Azure Backup provides security against such incidents by sending you critical alerts over your preferred notification channel (email, ITSM, webhook, runbook, and so on), which you can set up by creating an [Action Rule](/azure/azure-monitor/alerts/alerts-action-rules) on top of the alert. [Learn more](/azure/backup/security-overview#monitoring-and-alerts-of-suspicious-activity)
batch Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/error-handling.md
Reimaging a node reinstalls the operating system. Start tasks and job preparatio
Removing the node from the pool is sometimes necessary. -- Batch REST API: [removenodes](/rest/api/batchservice/computenode/removenodes)
+- Batch REST API: [removenodes](/rest/api/batchservice/pool/remove-nodes)
- Batch .NET API: [pooloperations](/dotnet/api/microsoft.azure.batch.pooloperations) ### Disable task scheduling on node
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 10/11/2021 Last updated : 01/04/2022 tags: connectors
To add an Azure Blob trigger to a logic app workflow in single-tenant Azure Logi
1. If you're prompted for connection details, [create your Azure Blob Storage connection now](#connect-blob-storage-account).
-1. Provide the necessary information for the trigger. On the **Parameters** tab, add the **Blob Path** for the blob that you want to monitor.
+1. Provide the necessary information for the trigger. On the **Parameters** tab, in the **Blob Path** property, enter the name of the folder that you want to monitor.
- 1. To find your blob path, open your storage account in the Azure portal.
+ 1. To find the folder name, open your storage account in the Azure portal.
1. In the navigation menu, under **Data Storage**, select **Containers**.
- 1. Select your blob container. On the container navigation menu, under **Settings**, select **Properties**.
+ 1. Select your blob container. Find the name for the folder that you want to monitor.
- 1. Copy the **URL** value, which is the path to the blob. The path resembles `https://<storage-container-name>/<folder-name>/{name}`. Provide your container name and folder name instead, but keep the `{name}` literal string.
+ 1. Return to the workflow designer. In the trigger's **Blob Path** property, enter the folder name, for example:
:::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-configure.png" alt-text="Screenshot showing the workflow designer for a Standard logic app workflow with a Blob Storage trigger and parameters configuration.":::
Now, you can call the [Blob service REST API](/rest/api/storageservices/blob-ser
## Next steps
-[Connectors overview for Azure Logic Apps](apis-list.md)
+[Connectors overview for Azure Logic Apps](apis-list.md)
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/linux-emulator.md
Title: Run the Azure Cosmos DB emulator on Docker for Linux
+ Title: Run the Azure Cosmos DB Emulator on Docker for Linux
description: Learn how to run and use the Azure Cosmos DB Linux Emulator on Linux, and macOS. Using the emulator you can develop and test your application locally for free, without an Azure subscription.
The number of physical partitions provisioned on the emulator is too low. Either
- If the emulator fails to start with the following error: ```bash
- "Failed loading Emulator secrets certificate. Error: 0x8009000f or similar, a new policy might have been added to your host that prevents an application such as Azure Cosmos DB emulator from creating and adding self signed certificate files into your certificate store."
+ "Failed loading Emulator secrets certificate. Error: 0x8009000f or similar, a new policy might have been added to your host that prevents an application such as Azure Cosmos DB Emulator from creating and adding self signed certificate files into your certificate store."
``` This can be the case even when you run in Administrator context, since the specific policy usually added by your IT department takes priority over the local Administrator. Using a Docker image for the emulator instead might help in this case, as long as you still have the permission to add the self-signed emulator SSL certificate into your host machine context (this is required by Java and .NET Cosmos SDK client application).
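If the startup failure relates to too few physical partitions, or if the certificate policy issue above pushes you toward the Docker image, a rough sketch of starting the Linux emulator container with a higher partition count is shown below; the image name, ports, and environment variables follow the commonly documented values, so adjust them to your environment.

```bash
# Sketch only: run the Linux emulator with a higher partition count and data persistence enabled.
docker run \
    --publish 8081:8081 \
    --publish 10251-10254:10251-10254 \
    --memory 3g --cpus=2.0 \
    --name=azure-cosmos-emulator-linux \
    --env AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10 \
    --env AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true \
    --interactive --tty \
    mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
```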
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/how-to-create-container.md
Previously updated : 10/16/2020 Last updated : 01/03/2022 ms.devlang: csharp
This article explains the different ways to create a container in Azure Cosmos D
If you encounter timeout exception when creating a collection, do a read operation to validate if the collection was created successfully. The read operation throws an exception until the collection create operation is successful. For the list of status codes supported by the create operation see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article. ```csharp
-// Create a container with a partition key and provision 1000 RU/s throughput.
-DocumentCollection myCollection = new DocumentCollection();
-myCollection.Id = "myContainerName";
-myCollection.PartitionKey.Paths.Add("/myPartitionKey");
-
-await client.CreateDocumentCollectionAsync(
- UriFactory.CreateDatabaseUri("myDatabaseName"),
- myCollection,
- new RequestOptions { OfferThroughput = 1000 });
+// Create a container with a partition key and provision 400 RU/s manual throughput.
+CosmosClient client = new CosmosClient(connectionString, clientOptions);
+Database database = await client.CreateDatabaseIfNotExistsAsync(databaseId);
+
+ContainerProperties containerProperties = new ContainerProperties()
+{
+ Id = containerId,
+ PartitionKeyPath = "/myPartitionKey"
+};
+
+var throughput = ThroughputProperties.CreateManualThroughput(400);
+Container container = await database.CreateContainerIfNotExistsAsync(containerProperties, throughput);
``` ## Next steps
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
The following table lists the terms and descriptions shown on the Credits tab.
| Credit applied toward charges | Total amount of the invoice or credit generated | | Ending credit | Credit end balance |
+The following table lists the accounting codes and descriptions for the adjustments.
+
+| **Accounting Code** | **Description** |
+| | |
+| F2 | Contractual Credit |
+| F3 | Strategic Investment Credit: Future Utilization Credit |
+| O1 | Offer Conversion Credit |
+| O2 | Pricing or Billing Credit |
+| O3 | Deployment Credit |
+| O4 | Offset Service Credit |
+| O5 | Coverage Gap Credit |
+| O6 | Subscription Interrupt Credit |
+| O7 | Technical Concession Credit |
+| O8 | Usage Emission Credit |
+| O9 | Fraud False Positive Credit |
+| O10 | Pricing Alignment Credit |
+| O11 | Sponsorship Continuity Credit |
+| O12 | Exchange Rate Reconciliation Credit |
+| O13 | Microsoft Internal Credit |
+| O14 | Supporting Documentation Credit |
+| O15 | Support Troubleshooting Credit |
+| O16 | Data Center Credit |
+| O17 | Backdated Pricing Credit |
+| O18 | Strategic Investment Credit: Offset of Past Utilization |
+| O19 | Licensing Benefit Credit |
+| O20 | Return of Reservation Credit |
+| O21 | Service Level Agreement Credit |
+| P1 | Custom Billing Credit |
+| P2 | Strategic Investment Credit: Planned Usage Credit |
+| T1 | Contractual Fund Transfer |
+| T2 | Strategic Investment Credit: Transfer of Funds |
+| T3 | Volume Licensing Reconciliation Credit |
+| T4 | Separate Channel Balance Transfer |
+| T5 | Reservations - Exchange |
+| U1 | Latent Onboarding Credit |
+| U2 | Funding Transfer |
+| U3 | Contract Term Transfer |
+| U4 | Strategic Investment Credit: Transfer of Utilization |
+ ## Review reservation transaction details You can view all the reservations placed for an Enterprise Agreement in the Azure portal.
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
Previously updated : 12/08/2021 Last updated : 12/28/2021 # Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory
The Azure Cosmos DB (SQL API) connector supports the following authentication ty
- [Key authentication](#key-authentication) - [Service principal authentication](#service-principal-authentication)-- [Managed identities for Azure resources authentication](#managed-identity)
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
### Key authentication
You can also store service principal key in Azure Key Vault.
} ```
-### <a name="managed-identity"></a> Managed identities for Azure resources authentication
+### <a name="managed-identity"></a> System-assigned managed identity authentication
>[!NOTE]
->Currently, the managed identity authentication is not supported in data flow.
+>Currently, the system-assigned managed identity authentication is not supported in data flow.
-A data factory or Synapse pipeline can be associated with a [managed identity for Azure resources](data-factory-service-identity.md), which represents this specific service instance. You can directly use this managed identity for Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Cosmos DB.
+A data factory or Synapse pipeline can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents this specific service instance. You can directly use this managed identity for Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Cosmos DB.
-To use managed identities for Azure resource authentication, follow these steps.
+To use system-assigned managed identities for Azure resource authentication, follow these steps.
-1. [Retrieve the managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your service.
+1. [Retrieve the system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your service.
-2. Grant the managed identity proper permission. See examples on how permission works in Cosmos DB from [Access control lists on files and directories](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the managed identity.
+2. Grant the system-assigned managed identity the proper permissions. For examples of how permissions work in Cosmos DB, see [Configure role-based access control for your Azure Cosmos DB account](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the system-assigned managed identity.
These properties are supported for the linked service:
These properties are supported for the linked service:
} } ```
+### User-assigned managed identity authentication
+
+>[!NOTE]
+>Currently, the user-assigned managed identity authentication is not supported in data flow.
+
+A data factory or Synapse pipeline can be associated with [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity), which represent this specific service instance. You can directly use such a managed identity for Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Cosmos DB.
+
+To use user-assigned managed identities for Azure resource authentication, follow these steps.
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant the user-assigned managed identity the proper permissions. For examples of how permissions work in Cosmos DB, see [Configure role-based access control for your Azure Cosmos DB account](../cosmos-db/how-to-setup-rbac.md). More specifically, create a role definition, and assign the role to the user-assigned managed identity.
+
+2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
+
+These properties are supported for the linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **CosmosDb**. |Yes |
+| accountEndpoint | Specify the account endpoint URL for the Azure Cosmos DB. | Yes |
+| database | Specify the name of the database. | Yes |
+| credential | Specify the user-assigned managed identity as the credential object. | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example:**
+
+```json
+{
+ "name": "CosmosDbSQLAPILinkedService",
+ "properties": {
+ "type": "CosmosDb",
+ "typeProperties": {
+ "accountEndpoint": "<account endpoint>",
+ "database": "<database name>",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
## Dataset properties
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 12/20/2021 Last updated : 12/29/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
The following properties are supported for an Azure Synapse Analytics linked ser
| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal. | | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal. | | azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are `AzurePublic`, `AzureChina`, `AzureUsGovernment`, and `AzureGermany`. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
+| credential | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication. |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No | For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively: - [SQL authentication](#sql-authentication)-- Azure AD application token authentication: [Service principal](#service-principal-authentication)-- Azure AD application token authentication: [Managed identities for Azure resources](#managed-identity)
+- [Service principal authentication](#service-principal-authentication)
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
>[!TIP] >When creating linked service for Azure Synapse **serverless** SQL pool from UI, choose "enter manually" instead of browsing from subscription.
To use service principal-based Azure AD application token authentication, follow
} ```
-### <a name="managed-identity"></a> Managed identities for Azure resources authentication
+### <a name="managed-identity"></a> System-assigned managed identities for Azure resources authentication
-A data factory or Synapse workspace can be associated with a [managed identity for Azure resources](data-factory-service-identity.md) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
+A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
-To use managed identity authentication, follow these steps:
+To use system-assigned managed identity authentication, follow these steps:
-1. **[Provision an Azure Active Directory administrator](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group with managed identity an admin role, skip steps 3 and 4. The administrator will have full access to the database.
+1. **[Provision an Azure Active Directory administrator](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group that contains the system-assigned managed identity an admin role, skip steps 3 and 4. The administrator will have full access to the database.
-2. **[Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)** for the Managed Identity. Connect to the data warehouse from or to which you want to copy data by using tools like SSMS, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL.
+2. **[Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)** for the system-assigned managed identity. Connect to the data warehouse from or to which you want to copy data by using tools like SSMS, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL.
```sql CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER; ```
-3. **Grant the Managed Identity needed permissions** as you normally do for SQL users and others. Run the following code, or refer to more options [here](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql). If you want to use PolyBase to load the data, learn the [required database permission](#required-database-permission).
+3. **Grant the system-assigned managed identity needed permissions** as you normally do for SQL users and others. Run the following code, or refer to more options [here](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql). If you want to use PolyBase to load the data, learn the [required database permission](#required-database-permission).
```sql EXEC sp_addrolemember db_owner, [your_resource_name];
To use managed identity authentication, follow these steps:
} } ```
+### User-assigned managed identity authentication
+
+A data factory or Synapse workspace can be associated with [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represent the resource. You can use such a managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
+
+To use user-assigned managed identity authentication, follow these steps:
+
+1. **[Provision an Azure Active Directory administrator](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group that contains the user-assigned managed identity an admin role, skip step 3. The administrator will have full access to the database.
+
+2. **[Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)** for the user-assigned managed identity. Connect to the data warehouse from or to which you want to copy data by using tools like SSMS, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following T-SQL.
+
+ ```sql
+ CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;
+ ```
+
+3. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and **grant the user-assigned managed identity needed permissions** as you normally do for SQL users and others. Run the following code, or refer to more options [here](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql). If you want to use PolyBase to load the data, learn the [required database permission](#required-database-permission).
+
+ ```sql
+ EXEC sp_addrolemember db_owner, [your_resource_name];
+ ```
+4. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
+
+5. **Configure an Azure Synapse Analytics linked service**.
+
+**Example:**
+
+```json
+{
+ "name": "AzureSqlDWLinkedService",
+ "properties": {
+ "type": "AzureSqlDW",
+ "typeProperties": {
+ "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
## Dataset properties
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 12/22/2021 Last updated : 12/28/2021 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
The following properties are supported for the SQL Managed Instance linked servi
| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal | | azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No | | alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
+| credential | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use a self-hosted integration runtime or an Azure integration runtime if your managed instance has a public endpoint and allows the service to access it. If not specified, the default Azure integration runtime is used. |Yes | > [!NOTE]
The following properties are supported for the SQL Managed Instance linked servi
For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively: - [SQL authentication](#sql-authentication)-- [Azure AD application token authentication: Service principal](#service-principal-authentication)-- [Azure AD application token authentication: Managed identities for Azure resources](#managed-identity)
+- [Service principal authentication](#service-principal-authentication)
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
### SQL authentication
To use a service principal-based Azure AD application token authentication, foll
- Application key - Tenant ID
-3. [Create logins](/sql/t-sql/statements/create-login-transact-sql) for the managed identity. In SQL Server Management Studio (SSMS), connect to your managed instance using a SQL Server account that is a **sysadmin**. In **master** database, run the following T-SQL:
+3. [Create logins](/sql/t-sql/statements/create-login-transact-sql) for the service principal. In SQL Server Management Studio (SSMS), connect to your managed instance using a SQL Server account that is a **sysadmin**. In **master** database, run the following T-SQL:
```sql CREATE LOGIN [your application name] FROM EXTERNAL PROVIDER ```
-4. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the managed identity. Connect to the database from or to which you want to copy data, run the following T-SQL:
+4. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the service principal. Connect to the database from or to which you want to copy data, run the following T-SQL:
```sql CREATE USER [your application name] FROM EXTERNAL PROVIDER ```
-5. Grant the managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/t-sql/statements/alter-role-transact-sql).
+5. Grant the service principal needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/t-sql/statements/alter-role-transact-sql).
```sql ALTER ROLE [role name e.g. db_owner] ADD MEMBER [your application name]
To use a service principal-based Azure AD application token authentication, foll
} ```
-### <a name="managed-identity"></a> Managed identities for Azure resources authentication
+### <a name="managed-identity"></a> System-assigned managed identity authentication
-A data factory or Synapse workspace can be associated with a [managed identity for Azure resources](data-factory-service-identity.md) that represents the service for authentication to other Azure services. You can use this managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
+A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the service for authentication to other Azure services. You can use this managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
-To use managed identity authentication, follow these steps.
+To use system-assigned managed identity authentication, follow these steps.
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance).
-2. [Create logins](/sql/t-sql/statements/create-login-transact-sql) for the managed identity. In SQL Server Management Studio (SSMS), connect to your managed instance using a SQL Server account that is a **sysadmin**. In **master** database, run the following T-SQL:
+2. [Create logins](/sql/t-sql/statements/create-login-transact-sql) for the system-assigned managed identity. In SQL Server Management Studio (SSMS), connect to your managed instance using a SQL Server account that is a **sysadmin**. In **master** database, run the following T-SQL:
    ```sql CREATE LOGIN [your_factory_or_workspace_name] FROM EXTERNAL PROVIDER ```
-3. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the managed identity. Connect to the database from or to which you want to copy data, run the following T-SQL:
+3. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the system-assigned managed identity. Connect to the database from or to which you want to copy data, run the following T-SQL:
```sql CREATE USER [your_factory_or_workspace_name] FROM EXTERNAL PROVIDER ```
-4. Grant the managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/t-sql/statements/alter-role-transact-sql).
+4. Grant the system-assigned managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/t-sql/statements/alter-role-transact-sql).
```sql ALTER ROLE [role name e.g. db_owner] ADD MEMBER [your_factory_or_workspace_name]
To use managed identity authentication, follow these steps.
5. Configure a SQL Managed Instance linked service.
-**Example: uses managed identity authentication**
+**Example: uses system-assigned managed identity authentication**
```json {
To use managed identity authentication, follow these steps.
} } ```
+### User-assigned managed identity authentication
+
+A data factory or Synapse workspace can be associated with [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represent the service for authentication to other Azure services. You can use such a managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
+
+To use user-assigned managed identity authentication, follow these steps.
+
+1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance).
+
+2. [Create logins](/sql/t-sql/statements/create-login-transact-sql) for the user-assigned managed identity. In SQL Server Management Studio (SSMS), connect to your managed instance using a SQL Server account that is a **sysadmin**. In **master** database, run the following T-SQL:
+
+ ```sql
+ CREATE LOGIN [your_factory_or_workspace_name] FROM EXTERNAL PROVIDER
+ ```
+
+3. [Create contained database users](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities) for the user-assigned managed identity. Connect to the database from or to which you want to copy data, run the following T-SQL:
+
+ ```sql
+ CREATE USER [your_factory_or_workspace_name] FROM EXTERNAL PROVIDER
+ ```
+
+4. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant the user-assigned managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/t-sql/statements/alter-role-transact-sql).
+
+ ```sql
+ ALTER ROLE [role name e.g. db_owner] ADD MEMBER [your_factory_or_workspace_name]
+ ```
+5. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
+
+6. Configure a SQL Managed Instance linked service.
+
+**Example: uses user-assigned managed identity authentication**
+
+```json
+{
+ "name": "AzureSqlDbLinkedService",
+ "properties": {
+ "type": "AzureSqlMI",
+ "typeProperties": {
+ "connectionString": "Data Source=<hostname,port>;Initial Catalog=<databasename>;",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
## Dataset properties
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 09/09/2021 Last updated : 12/31/2021 # Copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
The following properties are supported for the Dynamics linked service.
| type | The type property must be set to "Dynamics", "DynamicsCrm", or "CommonDataServiceForApps". | Yes | | deploymentType | The deployment type of the Dynamics instance. The value must be "Online" for Dynamics online. | Yes | | serviceUri | The service URL of your Dynamics instance, the same one you access from browser. An example is "https://\<organization-name>.crm[x].dynamics.com". | Yes |
-| authenticationType | The authentication type to connect to a Dynamics server. Valid values are "AADServicePrincipal" and "Office365". | Yes |
+| authenticationType | The authentication type to connect to a Dynamics server. Valid values are "AADServicePrincipal", "Office365" and "ManagedIdentity". | Yes |
| servicePrincipalId | The client ID of the Azure AD application. | Yes when authentication is "AADServicePrincipal" | | servicePrincipalCredentialType | The credential type to use for service-principal authentication. Valid values are "ServicePrincipalKey" and "ServicePrincipalCert". | Yes when authentication is "AADServicePrincipal" | | servicePrincipalCredential | The service-principal credential. <br/><br/>When you use "ServicePrincipalKey" as the credential type, `servicePrincipalCredential` can be a string that the service encrypts upon linked service deployment. Or it can be a reference to a secret in Azure Key Vault. <br/><br/>When you use "ServicePrincipalCert" as the credential, `servicePrincipalCredential` must be a reference to a certificate in Azure Key Vault. | Yes when authentication is "AADServicePrincipal" | | username | The username to connect to Dynamics. | Yes when authentication is "Office365" | | password | The password for the user account you specified as the username. Mark this field with "SecureString" to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes when authentication is "Office365" |
+| credential | Specify the user-assigned managed identity as the credential object. <br/><br/> [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md), assign them to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.| Yes when authentication is "ManagedIdentity" |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No | >[!NOTE]
The following properties are supported for the Dynamics linked service.
} } ```
+#### Example: Dynamics online using user-assigned managed identity authentication
+```json
+{
+ "name": "DynamicsLinkedService",
+ "properties": {
+ "type": "Dynamics",
+ "typeProperties": {
+ "deploymentType": "Online",
+ "serviceUri": "https://<organization-name>.crm[x].dynamics.com",
+ "authenticationType": "ManagedIdentity",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
### Dynamics 365 and Dynamics CRM on-premises with IFD Additional properties that compare to Dynamics online are **hostName** and **port**.
data-lake-store Data Lake Store Get Started Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-get-started-java-sdk.md
The code sample available [on GitHub](https://azure.microsoft.com/documentation/
```xml <dependencies>
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-data-lake-store-sdk</artifactId>
- <version>2.1.5</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-nop</artifactId>
- <version>1.7.21</version>
- </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.4.1</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-file-datalake</artifactId>
+ <version>12.7.2</version>
+ </dependency>
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-nop</artifactId>
+ <version>1.7.32</version>
+ </dependency>
</dependencies> ```
- The first dependency is to use the Data Lake Storage Gen1 SDK (`azure-data-lake-store-sdk`) from the maven repository. The second dependency is to specify the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen1 SDK uses [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep).
+ The first dependency (`azure-identity`) provides the Azure Active Directory credential types used to authenticate the application. The second dependency is to use the Data Lake Storage Gen2 SDK (`azure-storage-file-datalake`) from the Maven repository. The third dependency is to specify the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen2 SDK uses [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep).
3. Add the following import statements to your application. ```java
- import com.microsoft.azure.datalake.store.ADLException;
- import com.microsoft.azure.datalake.store.ADLStoreClient;
- import com.microsoft.azure.datalake.store.DirectoryEntry;
- import com.microsoft.azure.datalake.store.IfExists;
- import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
- import com.microsoft.azure.datalake.store.oauth2.ClientCredsTokenProvider;
+ import com.azure.identity.ClientSecretCredential;
+ import com.azure.identity.ClientSecretCredentialBuilder;
+ import com.azure.storage.file.datalake.DataLakeDirectoryClient;
+ import com.azure.storage.file.datalake.DataLakeFileClient;
+ import com.azure.storage.file.datalake.DataLakeServiceClient;
+ import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
+ import com.azure.storage.file.datalake.DataLakeFileSystemClient;
+ import com.azure.storage.file.datalake.models.ListPathsOptions;
+ import com.azure.storage.file.datalake.models.PathAccessControl;
+ import com.azure.storage.file.datalake.models.PathPermissions;
import java.io.*;
+ import java.time.Duration;
import java.util.Arrays; import java.util.List;
+ import java.util.Map;
``` ## Authentication
-* For end-user authentication for your application, see [End-user-authentication with Data Lake Storage Gen1 using Java](data-lake-store-end-user-authenticate-java-sdk.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md).
+* For end-user authentication for your application, see [End-user-authentication with Data Lake Storage Gen2 using Java](data-lake-store-end-user-authenticate-java-sdk.md).
+* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen2 using Java](data-lake-store-service-to-service-authenticate-java.md).
-## Create a Data Lake Storage Gen1 client
-Creating an [ADLStoreClient](https://azure.github.io/azure-data-lake-store-java/javadoc/) object requires you to specify the Data Lake Storage Gen1 account name and the token provider you generated when you authenticated with Data Lake Storage Gen1 (see [Authentication](#authentication) section). The Data Lake Storage Gen1 account name needs to be a fully qualified domain name. For example, replace **FILL-IN-HERE** with something like **mydatalakestoragegen1.azuredatalakestore.net**.
+## Create a Data Lake Storage Gen2 client
+Creating a [DataLakeServiceClient](https://azure.github.io/azure-sdk-for-java/datalakestorage%28gen2%29.html) object requires you to specify the Data Lake Storage Gen2 endpoint and the credential you created when you authenticated with Data Lake Storage Gen2 (see the [Authentication](#authentication) section). The endpoint needs to be a fully qualified domain name. For example, replace **FILL-IN-HERE** with something like **https://mydatalakestoragegen2.dfs.core.windows.net**.
```java
-private static String accountFQDN = "FILL-IN-HERE"; // full account FQDN, not just the account name
-ADLStoreClient client = ADLStoreClient.createClient(accountFQDN, provider);
+private static String endPoint = "FILL-IN-HERE"; // Data lake storage end point
+DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClientBuilder().endpoint(endPoint).credential(credential).buildClient();
```
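The `credential` used in the snippet above comes from the authentication step. A minimal sketch, assuming service-to-service authentication with a `ClientSecretCredential` from the `azure-identity` package; the tenant ID, client ID, and client secret values are placeholders for your own Azure AD app registration:

```java
// A minimal sketch, assuming service-to-service (client credentials) authentication.
// Replace the placeholder values with your own Azure AD app registration details.
ClientSecretCredential credential = new ClientSecretCredentialBuilder()
    .tenantId("FILL-IN-HERE")
    .clientId("FILL-IN-HERE")
    .clientSecret("FILL-IN-HERE")
    .build();
```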
-The code snippets in the following sections contain examples of some common filesystem operations. You can look at the full [Data Lake Storage Gen1 Java SDK API docs](https://azure.github.io/azure-data-lake-store-java/javadoc/) of the **ADLStoreClient** object to see other operations.
+The code snippets in the following sections contain examples of some common filesystem operations. You can look at the full [Data Lake Storage Gen2 Java SDK API docs](https://azure.github.io/azure-sdk-for-java/datalakestorage%28gen2%29.html) of the **DataLakeServiceClient** object to see other operations.
## Create a directory
-The following snippet creates a directory structure in the root of the Data Lake Storage Gen1 account you specified.
+The following snippet creates a directory structure in the root of the Data Lake Storage Gen2 account you specified.
```java // create directory
-client.createDirectory("/a/b/w");
+private String fileSystemName = "FILL-IN-HERE";
+DataLakeFileSystemClient dataLakeFileSystemClient = dataLakeServiceClient.createFileSystem(fileSystemName);
+dataLakeFileSystemClient.createDirectory("a/b/w");
System.out.println("Directory created."); ```
The following snippet creates a file (c.txt) in the directory structure and writ
```java // create file and write some content
-String filename = "/a/b/c.txt";
-OutputStream stream = client.createFile(filename, IfExists.OVERWRITE );
-PrintStream out = new PrintStream(stream);
-for (int i = 1; i <= 10; i++) {
- out.println("This is line #" + i);
- out.format("This is the same line (%d), but using formatted output. %n", i);
+String filename = "c.txt";
+try (FileOutputStream stream = new FileOutputStream(filename);
+ PrintWriter out = new PrintWriter(stream)) {
+ for (int i = 1; i <= 10; i++) {
+ out.println("This is line #" + i);
+ out.format("This is the same line (%d), but using formatted output. %n", i);
+ }
}
-out.close();
+dataLakeFileSystemClient.createFile("a/b/" + filename, true);
System.out.println("File created."); ```
You can also create a file (d.txt) using byte arrays.
```java // create file using byte arrays
-stream = client.createFile("/a/b/d.txt", IfExists.OVERWRITE);
+DataLakeFileClient dataLakeFileClient = dataLakeFileSystemClient.createFile("a/b/d.txt", true);
byte[] buf = getSampleContent();
-stream.write(buf);
-stream.close();
+try (ByteArrayInputStream stream = new ByteArrayInputStream(buf)) {
+ dataLakeFileClient.upload(stream, buf.length);
+}
System.out.println("File created using byte array."); ```
The following snippet appends content to an existing file.
```java // append to file
-stream = client.getAppendStream(filename);
-stream.write(getSampleContent());
-stream.close();
-System.out.println("File appended.");
+byte[] buf = getSampleContent();
+try (ByteArrayInputStream stream = new ByteArrayInputStream(buf)) {
+ DataLakeFileClient dataLakeFileClient = dataLakeDirectoryClient.getFileClient(filename);
+ dataLakeFileClient.append(stream, 0, buf.length);
+ System.out.println("File appended.");
+}
``` The definition for `getSampleContent` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/).
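The append snippet references a `dataLakeDirectoryClient` that isn't created in the snippets above; a minimal sketch of obtaining one, assuming the `a/b` directory created earlier in this article:

```java
// A minimal sketch, assuming the "a/b" directory created earlier in this article.
DataLakeDirectoryClient dataLakeDirectoryClient = dataLakeFileSystemClient.getDirectoryClient("a/b");
```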
The following snippet reads content from a file in a Data Lake Storage Gen1 acco
```java // Read File
-InputStream in = client.getReadStream(filename);
-BufferedReader reader = new BufferedReader(new InputStreamReader(in));
-String line;
-while ( (line = reader.readLine()) != null) {
+try (InputStream dataLakeIn = dataLakeFileSystemClient.getFileClient(filename).openInputStream().getInputStream();
+ BufferedReader reader = new BufferedReader(new InputStreamReader(dataLakeIn))) {
+ String line;
+ while ( (line = reader.readLine()) != null) {
System.out.println(line); } reader.close();
System.out.println("File contents read.");
## Concatenate files
-The following snippet concatenates two files in a Data Lake Storage Gen1 account. If successful, the concatenated file replaces the two existing files.
+The following snippet concatenates two files in a Data Lake Storage Gen2 account. If successful, the concatenated file replaces the two existing files.
```java // concatenate the two files into one
+dataLakeFileClient = dataLakeDirectoryClient.createFile("/a/b/f.txt", true);
List<String> fileList = Arrays.asList("/a/b/c.txt", "/a/b/d.txt");
-client.concatenateFiles("/a/b/f.txt", fileList);
+fileList.stream().forEach(filename -> {
+ File concatenateFile = new File(filename);
+ try (InputStream fileIn = new FileInputStream(concatenateFile)) {
+ dataLakeFileClient.append(fileIn, 0, concatenateFile.length());
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+});
System.out.println("Two files concatenated into a new file."); ```
The following snippet renames a file in a Data Lake Storage Gen1 account.
```java //rename the file
-client.rename("/a/b/f.txt", "/a/b/g.txt");
+dataLakeFileSystemClient.getFileClient("a/b/f.txt").rename(dataLakeFileSystemClient.getFileSystemName(), "a/b/g.txt");
System.out.println("New file renamed."); ```
The following snippet retrieves the metadata for a file in a Data Lake Storage G
```java // get file metadata
-DirectoryEntry ent = client.getDirectoryEntry(filename);
-printDirectoryInfo(ent);
+Map<String, String> metaData = dataLakeFileSystemClient.getFileClient(filename).getProperties().getMetadata();
+printDirectoryInfo(metaData);
System.out.println("File metadata retrieved."); ```
The following snippet sets permissions on the file that you created in the previ
```java // set file permission
-client.setPermission(filename, "744");
+PathAccessControl pathAccessControl = dataLakeFileSystemClient.getFileClient(filename).getAccessControl();
+dataLakeFileSystemClient.getFileClient(filename).setPermissions(PathPermissions.parseOctal("744"), pathAccessControl.getGroup(), pathAccessControl.getOwner());
System.out.println("File permission set."); ```
The following snippet lists the contents of a directory, recursively.
```java // list directory contents
-List<DirectoryEntry> list = client.enumerateDirectory("/a/b", 2000);
-System.out.println("Directory listing for directory /a/b:");
-for (DirectoryEntry entry : list) {
- printDirectoryInfo(entry);
-}
+dataLakeFileSystemClient.listPaths(new ListPathsOptions().setPath("a/b"), Duration.ofSeconds(2000)).forEach(path -> {
+ printDirectoryInfo(dataLakeFileSystemClient.getDirectoryClient(path.getName()).getProperties().getMetadata());
+});
System.out.println("Directory contents listed."); ```
The following snippet deletes the specified files and folders in a Data Lake Sto
```java // delete directory along with all the subdirectories and files in it
-client.deleteRecursive("/a");
+dataLakeFileSystemClient.deleteDirectory("a");
System.out.println("All files and folders deleted recursively"); promptEnterKey(); ```
promptEnterKey();
2. To produce a standalone jar that you can run from command-line build the jar with all dependencies included, using the [Maven assembly plugin](https://maven.apache.org/plugins/maven-assembly-plugin/usage.html). The pom.xml in the [example source code on GitHub](https://github.com/Azure-Samples/data-lake-store-java-upload-download-get-started/blob/master/pom.xml) has an example. ## Next steps
-* [Explore JavaDoc for the Java SDK](https://azure.github.io/azure-data-lake-store-java/javadoc/)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
+* [Explore JavaDoc for the Java SDK](https://azure.github.io/azure-sdk-for-java/datalakestorage%28gen2%29.html)
+* [Secure data in Data Lake Storage Gen2](data-lake-store-secure-data.md)
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
With the integrated Azure Workbooks functionality, Microsoft Defender for Cloud
- ['Vulnerability Assessment Findings' workbook](#use-the-vulnerability-assessment-findings-workbook) - View the findings of vulnerability scans of your Azure resources - ['Compliance Over Time' workbook](#use-the-compliance-over-time-workbook) - View the status of a subscription's compliance with the regulatory or industry standards you've selected Choose one of the supplied workbooks or create your own.
defender-for-iot How To Region Move https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-region-move.md
+
+ Title: Move an "iotsecuritysolutions" resource to another region by using the Azure portal
+description: Move an "iotsecuritysolutions" resource from one Azure region to another by using the Azure portal.
++ Last updated : 01/04/2022++
+# Move an "iotsecuritysolutions" resource to another region by using the Azure portal
+
+There are various scenarios for moving an existing resource from one region to another. For example, you might want to take advantage of features or services that are available only in specific regions, meet internal policy and governance requirements, or respond to capacity planning needs.
+
+You can move a Microsoft Defender for IoT "iotsecuritysolutions" resource to a different Azure region. The "iotsecuritysolutions" resource is a hidden resource that is connected to a specific IoT hub and is used to enable security on that hub. Learn how to [configure and create](/azure/templates/microsoft.security/iotsecuritysolutions?tabs=bicep) this resource.
+
+## Prerequisites
+
+- Make sure that the resource is in the Azure region that you want to move from.
+
+- An existing "iotsecuritysolutions" resource.
+
+- Make sure that your Azure subscription allows you to create "iotsecuritysolutions" resources in the target region.
+
+- Make sure that your subscription has enough resources to support the addition of resources for this process. For more information, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
+
+## Prepare
+
+In this section, you prepare for the move by finding the resource and confirming that it's in the region you want to move from.
+
+Before transitioning the resource to the new region, we recommend using [Log Analytics](../../azure-monitor/logs/quick-create-workspace.md) to store alerts and raw events.
+
+**To find the resource you want to move**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and then select **All Resources**.
+
+1. Select **Show hidden types**.
+
+ :::image type="content" source="media/region-move/hidden-resources.png" alt-text="Screenshot showing where the Show hidden resources checkbox is located.":::
+
+1. Select the **Type** filter, and enter `iotsecuritysolutions` in the search field.
+
+ :::image type="content" source="media/region-move/filter-type.png" alt-text="Screenshot showing you how to filter by type.":::
+
+1. Select **Apply**.
+
+1. Select your hub from the list.
+
+1. Ensure that you have selected the correct hub, and that it is in the region you want to move it from.
+
+ :::image type="content" source="media/region-move/location.png" alt-text="Screenshot showing you the region your hub is located in.":::
+
+## Move
+
+You are now ready to move your resource to your new location. Follow [these instructions](/azure/iot-hub/iot-hub-how-to-clone) to move your IoT Hub.
+
+After transferring and enabling the resource, you can link it to the same Log Analytics workspace that was configured earlier.
+
+## Verify
+
+In this section, you will verify that the resource has been moved, that the connection to the IoT Hub has been enabled, and that everything is working correctly.
+
+**To verify that the resource is in the correct region**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and then select **All Resources**.
+
+1. Select **Show hidden types**.
+
+ :::image type="content" source="media/region-move/hidden-resources.png" alt-text="Screenshot showing where the Show hidden resources checkbox is located.":::
+
+1. Select the **Type** filter, and enter `iotsecuritysolutions` in the search field.
+
+1. Select **Apply**.
+
+1. Select your hub from the list.
+
+1. Ensure that the region has been changed.
+
+ :::image type="content" source="media/region-move/location-changed.png" alt-text="Screenshot that shows you the region your hub is located in.":::
+
+**To ensure everything is working correctly**:
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT**, and select **Recommendations**.
+
+ :::image type="content" source="media/region-move/recommendations.png" alt-text="Screenshot showing you where to go to see recommendations.":::
+
+The recommendations should have transferred and everything should be working correctly.
+
+## Clean up source resources
+
+Don't clean up until you have finished verifying that the resource has moved and that the recommendations have transferred. When you're ready, clean up the old resources by performing these steps:
+
+- If you haven't already, delete the old hub. This removes all of the active devices from the hub.
+
+- If you have routing resources that you moved to the new location, you can delete the old routing resources.
+
+## Next steps
+
+In this tutorial, you moved an Azure resource from one region to another and cleaned up the source resource.
+
+- Learn more about [moving your resources to a new resource group or subscription](/azure/azure-resource-manager/management/move-resource-group-and-subscription).
+
+- Learn how to [move VMs to another Azure region](/azure/site-recovery/azure-to-azure-tutorial-migrate).
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-azure-digital-twins-explorer.md
Developers may find this tool especially useful in the following scenarios:
The explorer's main purpose is to help you visualize and understand your graph, and update your graph as needed. For large-scale solutions and for work that should be repeated or automated, consider using the [APIs and SDKs](./concepts-apis-sdks.md) to interact with your instance through code instead.
+## How to access
+
+The main way to access Azure Digital Twins Explorer is through the [Azure portal](https://portal.azure.com).
+
+To open Azure Digital Twins Explorer for an Azure Digital Twins instance, first navigate to the instance in the portal, by searching for its name in the portal search bar.
+ [!INCLUDE [digital-twins-access-explorer.md](../../includes/digital-twins-access-explorer.md)] ## Features and capabilities
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-ingress-egress.md
Endpoints are attached to Azure Digital Twins using management APIs or the Azure
There are many other services where you may want to ultimately direct your data, such as [Azure Storage](../storage/common/storage-introduction.md), [Azure Maps](../azure-maps/about-azure-maps.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). To send your data to services like these, attach the destination service to an endpoint.
-For example, if you're also using Azure Maps and want to correlate location with your Azure Digital Twins [twin graph](concepts-twins-graph.md), you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. For more information on integrating Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md)
+For example, if you're also using Azure Maps and want to correlate location with your Azure Digital Twins graph, you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. For more information on integrating Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md). For information on routing data in a similar way to Time Series Insights, see [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
-You can also learn how to route data in a similar way to Time Series Insights, in [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
+Azure Digital Twins implements **at least once** delivery for data emitted to egress services.
## Next steps
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
Event routes are used for both of these scenarios.
An event route lets you send event data from digital twins in Azure Digital Twins to custom-defined endpoints in your subscriptions. Three Azure services are currently supported for endpoints: [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). Each of these Azure services can be connected to other services and acts as the middleman, sending data along to final destinations such as TSI or Azure Maps for whatever processing you need.
+Azure Digital Twins implements **at least once** delivery for data emitted to egress services.
+ The following diagram illustrates the flow of event data through a larger IoT solution with an Azure Digital Twins aspect: :::image type="content" source="media/concepts-route-events/routing-workflow.png" alt-text="Diagram of Azure Digital Twins routing data through endpoints to several downstream services." border="false":::
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
>[!NOTE] >This tool is currently in **public preview**.
+## How to access
+
+The main way to access Azure Digital Twins Explorer is through the [Azure portal](https://portal.azure.com).
+
+To open Azure Digital Twins Explorer for an Azure Digital Twins instance, first navigate to the instance in the portal, by searching for its name in the portal search bar.
+ [!INCLUDE [digital-twins-access-explorer.md](../../includes/digital-twins-access-explorer.md)] ### Switch contexts within the app
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
You'll need an Azure subscription to complete this quickstart. If you don't have
You'll also need to download the materials for the sample graph used in the quickstart. Use the links and instructions below to download the three required files from the [digital-twins-explorer GitHub repository](https://github.com/Azure-Samples/digital-twins-explorer). Later, you'll follow more instructions to upload them to Azure Digital Twins. * [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Room.json): This is a model file representing a room in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file somewhere on your machine with the name **Room.json**.
-* [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Floor.json): This is a model file representing a floor in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file to the same location as **Room.json**, under the name **Floor.json**.
-* [buildingScenario.xlsx](https://github.com/Azure-Samples/digital-twins-explorer/blob/main/client/examples/buildingScenario.xlsx): This file contains a graph of room and floor twins, and relationships between them. Navigate to the link and select the **Download** button. This will download the file to your default download location.
+* [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Floor.json): This is a model file representing a floor in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file to the same location as Room.json, under the name **Floor.json**.
+* [buildingScenario.xlsx](https://github.com/Azure-Samples/digital-twins-explorer/raw/main/client/examples/buildingScenario.xlsx): This file contains a graph of room and floor twins, and relationships between them. Depending on your browser settings, selecting this link may download the **buildingScenario.xlsx** file automatically to your default download location, or it may open the file in your browser with an option to download. Here is what that download option looks like in Microsoft Edge:
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png" alt-text="Screenshot of the digital-twins-explorer/client/examples/buildingScenario.xlsx file in GitHub. The Download button is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png" alt-text="Screenshot of the buildingScenario.xlsx file viewed in a Microsoft Edge browser. A button saying Download is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png":::
## Set up Azure Digital Twins
When the instance is finished deploying, use the **Go to resource** button to na
:::image type="content" source= "media/quickstart-azure-digital-twins-explorer/deployment-complete.png" alt-text="Screenshot of the deployment page for Azure Digital Twins in the Azure portal. The page indicates that deployment is complete.":::
-Next, select the **Open Azure Digital Twins Explorer (preview)** button.
--
-This will open an Azure Digital Twins Explorer window connected to your instance.
- ## Upload the sample materials
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
-## Create an Azure Database Migration Service instance
-1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
-
- ![Azure Marketplace](media/tutorial-sql-server-to-managed-instance-online/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-managed-instance-online/dms-create-service-1.png)
-
-3. On the **Create Migration Service** basics screen:
-
- - Select the subscription.
- - Create a new resource group or choose an existing one.
- - Specify a name for the instance of the Azure Database Migration Service.
- - Select the location in which you want to create the instance of Azure Database Migration Service.
- - Choose **Azure** as the service mode.
- - Select an SKU from the Premium pricing tier.
-
- > [!NOTE]
- > Online migrations are supported only when using the Premium tier.
-
- - For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service instance basics settings](media/tutorial-sql-server-to-managed-instance-online/dms-create-service-2.png)
-
- - Select **Next: Networking**.
-
-4. On the **Create Migration Service** networking screen:
-
- - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Managed Instance.
-
- - For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
- - For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
- ![Configure Azure Database Migration Service instance networking settings](media/tutorial-sql-server-to-managed-instance-online/dms-create-service-3.png)
-
- - Select **Review + Create** to review the details and then select **Create** to create the service.
+> [!NOTE]
+> For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
## Create a migration project
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
To migrate the **AdventureWorks2016** schema to a single database or pooled data
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
-## Create an Azure Database Migration Service instance
-
-1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
-
- ![Azure Marketplace](media/tutorial-sql-server-to-azure-sql/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-azure-sql/dms-create-1.png)
-
-3. On the **Create Migration Service** basics screen:
-
- - Select the subscription.
- - Create a new resource group or choose an existing one.
- - Specify a name for the instance of the Azure Database Migration Service.
- - Select the location in which you want to create the instance of Azure Database Migration Service.
- - Choose **Azure** as the service mode.
- - Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service instance basics settings](media/tutorial-sql-server-to-azure-sql/dms-settings-2.png)
-
- - Select **Next: Networking**.
-
-4. On the **Create Migration Service** networking screen:
-
- - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Database instance. For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
- ![Configure Azure Database Migration Service instance networking settings](media/tutorial-sql-server-to-azure-sql/dms-settings-3.png)
-
- - Select **Review + Create** to review the details and then select **Create** to create the service.
## Create a migration project
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
To complete this tutorial, you need to:
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
-## Create an Azure Database Migration Service instance
-
-1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
-
- ![Azure Marketplace](media/tutorial-sql-server-to-managed-instance/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-managed-instance/dms-create-service-1.png)
-
-3. On the **Create Migration Service** basics screen:
-
- - Select the subscription.
- - Create a new resource group or choose an existing one.
- - Specify a name for the instance of the Azure Database Migration Service.
- - Select the location in which you want to create the instance of Azure Database Migration Service.
- - Choose **Azure** as the service mode.
- - Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service instance basics settings](media/tutorial-sql-server-to-managed-instance/dms-create-service-2.png)
-
- - Select **Next: Networking**.
-
-4. On the **Create Migration Service** networking screen:
-
- - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Managed Instance.
-
- - For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- - For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
- ![Configure Azure Database Migration Service instance networking settings](media/tutorial-sql-server-to-managed-instance/dms-create-service-3.png)
-
- - Select **Review + Create** to review the details and then select **Create** to create the service.
+> [!NOTE]
+> For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
## Create a migration project
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-filtering.md
The JSON syntax for filtering by subject is:
```json "filter": {
- "subjectBeginsWith": "/blobServices/default/containers/mycontainer/log",
+ "subjectBeginsWith": "/blobServices/default/containers/mycontainer/blobs/log",
"subjectEndsWith": ".jpg" }
Key is the field in the event data that you're using for filtering. It can be on
```json "filter": {
- "subjectBeginsWith": "/blobServices/default/containers/mycontainer/log",
+ "subjectBeginsWith": "/blobServices/default/containers/mycontainer/blobs/log",
"subjectEndsWith": ".jpg", "enableAdvancedFilteringOnArrays": true }
event-hubs Apache Kafka Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/apache-kafka-migration-guide.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Follow step-by-step instructions in the [Create an event hub](event-hubs-create.md) article to create an Event Hubs namespace and an event hub. ### Connection string
-Follow steps from the [Get connection string from the portal](event-hubs-get-connection-string.md#get-connection-string-from-the-portal) article. And, note down the connection string for later use.
+Follow steps from the [Get connection string from the portal](event-hubs-get-connection-string.md#azure-portal) article. And, note down the connection string for later use.
### Fully qualified domain name (FQDN) You may also need the FQDN that points to your Event Hub namespace. The FQDN can be found within your connection string as follows:
event-hubs Event Hubs C Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-c-getstarted-send.md
To complete this tutorial, you need the following:
* A C development environment. This tutorial assumes the gcc stack on an Azure Linux VM with Ubuntu 14.04. * [Microsoft Visual Studio](https://www.visualstudio.com/).
-* **Create an Event Hubs namespace and an event hub**. Use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Get the value of access key for the event hub by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the access key in the code you write later in this tutorial. The default key name is: **RootManageSharedAccessKey**.
+* **Create an Event Hubs namespace and an event hub**. Use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Get the value of access key for the event hub by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the access key in the code you write later in this tutorial. The default key name is: **RootManageSharedAccessKey**.
## Write code to send messages to Event Hubs This section shows how to write a C app to send events to your event hub. The code uses the Proton AMQP library from the [Apache Qpid project](https://qpid.apache.org/). This is analogous to using Service Bus queues and topics with AMQP from C as shown [in this sample](https://code.msdn.microsoft.com/Using-Apache-Qpid-Proton-C-afd76504). For more information, see the [Qpid Proton documentation](https://qpid.apache.org/proton/https://docsupdatetracker.net/index.html).
event-hubs Event Hubs Capture Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-python.md
In this quickstart, you:
- avro-python3 1.10.1 - An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. - An active Event Hubs namespace and event hub.
-[Create an Event Hubs namespace and an event hub in the namespace](event-hubs-create.md). Record the name of the Event Hubs namespace, the name of the event hub, and the primary access key for the namespace. To get the access key, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). The default key name is *RootManageSharedAccessKey*. For this quickstart, you need only the primary key. You don't need the connection string.
+[Create an Event Hubs namespace and an event hub in the namespace](event-hubs-create.md). Record the name of the Event Hubs namespace, the name of the event hub, and the primary access key for the namespace. To get the access key, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md#azure-portal). The default key name is *RootManageSharedAccessKey*. For this quickstart, you need only the primary key. You don't need the connection string.
- An Azure storage account, a blob container in the storage account, and a connection string to the storage account. If you don't have these items, do the following: 1. [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) 1. [Create a blob container in the storage account](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
To complete this quickstart, you need the following prerequisites:
- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com). - **Microsoft Visual Studio 2019**. The Azure Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, it is recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the connection string later in this quickstart.
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.
## Send events This section shows you how to create a .NET Core console application to send events to an event hub.
event-hubs Event Hubs Get Connection String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-get-connection-string.md
Title: Get connection string - Azure Event Hubs | Microsoft Docs description: This article provides instructions for getting a connection string that clients can use to connect to Azure Event Hubs. Previously updated : 07/23/2021 Last updated : 01/03/2022 # Get an Event Hubs connection string
+To communicate with an event hub in a namespace, you need a connection string for the namespace or the event hub. If you use a connection string to the namespace from your application, the application will have the provided access (manage, read, or write) to all event hubs in the namespace. If you use a connection string to the event hub, you will have the provided access to that specific event hub.
-To use Event Hubs, you need to create an Event Hubs namespace. A namespace is a scoping container for multiple event hubs or Kafka topics. This namespace gives you a unique [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name). Once a namespace is created, you can obtain the connection string required to communicate with Event Hubs.
+The connection string for a namespace has the following components embedded within it:
-The connection string for Azure Event Hubs has the following components embedded within it,
-
-* FQDN = the FQDN of the EventHubs namespace you created (it includes the EventHubs namespace name followed by servicebus.windows.net)
+* FQDN = the FQDN of the Event Hubs namespace you created (it includes the Event Hubs namespace name followed by servicebus.windows.net)
* SharedAccessKeyName = the name you chose for your application's SAS keys * SharedAccessKey = the generated value of the key.
-The connection string template looks like
+The connection string for a namespace looks like:
+
+```
+Endpoint=sb://<NamespaceName>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>
+```
+
+The connection string for an event hub has one additional component: `EntityPath=<EventHubName>`.
+ ```
-Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>
+Endpoint=sb://<NamespaceName>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EventHubName>
```
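As an illustration of how these connection strings are used, here's a minimal sketch that creates a producer client with the `azure-messaging-eventhubs` Java library; the connection string and event hub name values are placeholders:

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

// A minimal sketch, assuming the azure-messaging-eventhubs library.
// Pass a namespace-level connection string plus the event hub name, or an
// event hub-level connection string that already contains EntityPath.
EventHubProducerClient producer = new EventHubClientBuilder()
    .connectionString("<namespace-connection-string>", "<event-hub-name>")
    .buildProducerClient();
```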
-An example connection string might look like
-`Endpoint=sb://dummynamespace.servicebus.windows.net/;SharedAccessKeyName=DummyAccessKeyName;SharedAccessKey=5dOntTRytoC24opYThisAsit3is2B+OGY1US/fuL3ly=`
+This article shows you how to get a connection string to a namespace or a specific event hub by using the Azure portal, PowerShell, or CLI.
-This article walks you through various ways of obtaining the connection string.
+## Azure portal
+
+### Connection string for a namespace
-## Get connection string from the portal
1. Sign in to [Azure portal](https://portal.azure.com). 2. Select **All services** on the left navigational menu. 3. Select **Event Hubs** in the **Analytics** section. 4. In the list of event hubs, select your event hub. 6. On the **Event Hubs Namespace** page, select **Shared Access Policies** on the left menu.
-7. Select a **shared access policy** in the list of policies. The default one is named: **RootManageSharedAccessPolicy**. You can add a policy with appropriate permissions (read, write), and use that policy.
+7. Select a **shared access policy** in the list of policies. The default one is named: **RootManageSharedAccessPolicy**. You can add a policy with appropriate permissions (send, listen), and use that policy.
:::image type="content" source="./media/event-hubs-get-connection-string/event-hubs-get-connection-string2.png" alt-text="Event Hubs shared access policies"::: 8. Select the **copy** button next to the **Connection string-primary key** field. :::image type="content" source="./media/event-hubs-get-connection-string/event-hubs-get-connection-string3.png" alt-text="Event Hubs - get connection string":::
-## Getting the connection string with Azure PowerShell
+### Connection string for a specific event hub in a namespace
+This section gives you steps for getting a connection string to a specific event hub in a namespace.
+
+1. On the **Event Hubs Namespace** page, select the event hub in the bottom pane.
+1. On the **Event Hubs instance** page, select **Shared access policies** on the left menu.
+1. There's no default policy created for an event hub. Create a policy with **Manage**, **Send**, or **Listen** access.
+1. Select the policy from the list.
+1. Select the **copy** button next to the **Connection string-primary key** field.
+
+ :::image type="content" source="./media/event-hubs-get-connection-string/connection-string-event-hub.png" alt-text="Connection string to a specific event hub.":::
+
+## Azure PowerShell
+
+You can use the [Get-AzEventHubKey](/powershell/module/az.eventhub/get-azeventhubkey) cmdlet to get the connection string for a specific policy/rule.
+Here's a sample command to get the connection string for a namespace. `MyAuthRuleName` is the name of the shared access policy. For a namespace, there's a default one: `RootManageSharedAccessKey`.
-You can use the [Get-AzEventHubKey](/powershell/module/az.eventhub/get-azeventhubkey) to get the connection string for the specific policy/rule name as shown below:
+```azurepowershell-interactive
+Get-AzEventHubKey -ResourceGroupName MyResourceGroupName -NamespaceName MyNamespaceName -AuthorizationRuleName MyAuthRuleName
+```
+
+Here's a sample command to get the connection string for a specific event hub within a namespace:
```azurepowershell-interactive
-Get-AzEventHubKey -ResourceGroupName MYRESOURCEGROUP -NamespaceName MYEHUBNAMESPACE -AuthorizationRuleName RootManageSharedAccessKey
+Get-AzEventHubKey -ResourceGroupName MyResourceGroupName -NamespaceName MyNamespaceName -EventHubName MyEventHubName -AuthorizationRuleName MyAuthRuleName
+```
+
+Here's a sample command to get the connection string for an event hub in a Geo-DR cluster, which has an alias.
+
+```azurepowershell-interactive
+Get-AzEventHubKey -ResourceGroupName MyResourceGroupName -NamespaceName MyNamespaceName -EventHubName MyEventHubName -AliasName MyAliasName -Name MyAuthRuleName
+```
+
+## Azure CLI
+Here's a sample command to get the connection string for a namespace. The command uses `RootManageSharedAccessKey`, the default shared access policy that's created for a namespace.
+
+```azurecli-interactive
+az eventhubs namespace authorization-rule keys list --resource-group MyResourceGroupName --namespace-name MyNamespaceName --name RootManageSharedAccessKey
```
-## Getting the connection string with Azure CLI
-You can use the following to get the connection string for the namespace:
+Here's a sample command to get the connection string for a specific event hub within a namespace:
```azurecli-interactive
-az eventhubs namespace authorization-rule keys list --resource-group MYRESOURCEGROUP --namespace-name MYEHUBNAMESPACE --name RootManageSharedAccessKey
+az eventhubs eventhub authorization-rule keys list --resource-group MyResourceGroupName --namespace-name MyNamespaceName --eventhub-name MyEventHubName --name MyAuthRuleName
```
-Or you can use the following to get the connection string for an EventHub entity:
+Here's a sample command to get the connection string for an event hub in a Geo-DR cluster, which has an alias.
```azurecli-interactive
-az eventhubs eventhub authorization-rule keys list --resource-group MYRESOURCEGROUP --namespace-name MYEHUBNAMESPACE --eventhub-name MYEHUB --name RootManageSharedAccessKey
+az eventhubs georecovery-alias authorization-rule keys list --resource-group MyResourceGroupName --namespace-name MyNamespaceName --eventhub-name MyEventHubName --alias-name MyAliasName --name MyAuthRuleName
```
For more information about Azure CLI commands for Event Hubs, see [Azure CLI for Event Hubs](/cli/azure/eventhubs).

You can learn more about Event Hubs by visiting the following links:

* [Event Hubs overview](./event-hubs-about.md)
-* [Create an Event Hub](event-hubs-create.md)
+* [Create an event hub](event-hubs-create.md)
event-hubs Event Hubs Java Get Started Send Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-java-get-started-send-legacy.md
To complete this quickstart, you need the following prerequisites:
- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com). - A Java development environment. This quickstart uses [Eclipse](https://www.eclipse.org/).-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the value of access key for the event hub by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the access key in the code you write later in this quickstart. The default key name is: **RootManageSharedAccessKey**.
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the value of access key for the event hub by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the access key in the code you write later in this quickstart. The default key name is: **RootManageSharedAccessKey**.
## Send events This section shows you how to create a Java application to send events an event hub.
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-java-get-started-send.md
To complete this quickstart, you need the following prerequisites:
- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com). - A Java development environment. This quickstart uses [Eclipse](https://www.eclipse.org/). Java Development Kit (JDK) with version 8 or above is required.-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the connection string later in this quickstart.
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.
## Send events
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-node-get-started-send.md
To complete this quickstart, you need the following prerequisites:
1. In the [Azure portal](https://portal.azure.com), create a namespace of type *Event Hubs*, and then obtain the management credentials that your application needs to communicate with the event hub. 1. To create the namespace and event hub, follow the instructions at [Quickstart: Create an event hub by using the Azure portal](event-hubs-create.md). 1. Continue by following the instructions in this quickstart.
- 1. To get the connection string for your Event Hub namespace, follow the instructions in [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). Record the connection string to use later in this quickstart.
-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the connection string later in this quickstart.
+ 1. To get the connection string for your Event Hub namespace, follow the instructions in [Get connection string](event-hubs-get-connection-string.md#azure-portal). Record the connection string to use later in this quickstart.
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.
### Install the npm package To install the [Node Package Manager (npm) package for Event Hubs](https://www.npmjs.com/package/@azure/event-hubs), open a command prompt that has *npm* in its path, change the directory
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-python-get-started-send.md
To complete this quickstart, you need the following prerequisites:
```cmd pip install azure-eventhub-checkpointstoreblob-aio ```-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the connection string later in this quickstart.
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.
## Send events In this section, you create a Python script to send events to the event hub that you created earlier.
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-set-global-reach-portal.md
Enable connectivity between your on-premises networks. There are separate sets o
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png" alt-text="Enable global reach from private peering":::
-1. On the *Add Global Reach* configuration page, give a name to this configuration. Select the *ExpressRoute circuit* you want to connect this circuit to and enter in a **/29 IPv4** for the *Global Reach subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, or in your on-premises network. Select **Add** to add the circuit to the private peering configuration.
+1. On the *Add Global Reach* configuration page, give a name to this configuration. Select the *ExpressRoute circuit* you want to connect this circuit to and enter a **/29 IPv4** subnet for the *Global Reach subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, private peering subnet, or on-premises network. Select **Add** to add the circuit to the private peering configuration. An Azure CLI sketch of this step follows the screenshot below.
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png" alt-text="Screenshot of adding Global Reach in private peering.":::
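If you prefer to script this configuration rather than use the portal, the following Azure CLI command is a rough sketch of the equivalent operation. The resource group, circuit names, connection name, peer circuit resource ID, and address prefix are placeholders, and parameter names can vary by CLI version.

```azurecli-interactive
# Sketch: add a Global Reach connection between two ExpressRoute circuits
az network express-route peering connection create \
    --resource-group MyResourceGroup \
    --circuit-name MyCircuit1 \
    --peering-name AzurePrivatePeering \
    --name MyGlobalReachConnection \
    --peer-circuit "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup2/providers/Microsoft.Network/expressRouteCircuits/MyCircuit2" \
    --address-prefix "192.168.8.0/29"
```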
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Previously updated : 12/09/2021 Last updated : 01/04/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall has the following known issues:
|NAT rules with ports between 64000 and 65535 are unsupported|Azure Firewall allows any port in the 1-65535 range in network and application rules, however NAT rules only support ports in the 1-63999 range.|This is a current limitation. |Configuration updates may take five minutes on average|An Azure Firewall configuration update can take three to five minutes on average, and parallel updates aren't supported.|A fix is being investigated.| |Azure Firewall uses SNI TLS headers to filter HTTPS and MSSQL traffic|If browser or server software doesn't support the Server Name Indicator (SNI) extension, you can't connect through Azure Firewall.|If browser or server software doesn't support SNI, then you may be able to control the connection using a network rule instead of an application rule. See [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication) for software that supports SNI.|
-|Custom DNS doesn't work with forced tunneling|If force tunneling is enabled, custom DNS doesn't work.|A fix is being investigated.|
|Start/Stop doesnΓÇÖt work with a firewall configured in forced-tunnel mode|Start/stop doesnΓÇÖt work with Azure firewall configured in forced-tunnel mode. Attempting to start Azure Firewall with forced tunneling configured results in the following error:<br><br>*Set-AzFirewall: AzureFirewall FW-xx management IP configuration cannot be added to an existing firewall. Redeploy with a management IP configuration if you want to use forced tunneling support.<br>StatusCode: 400<br>ReasonPhrase: Bad Request*|Under investigation.<br><br>As a workaround, you can delete the existing firewall and create a new one with the same parameters.| |Can't add firewall policy tags using the portal or Azure Resource Manager (ARM) templates|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal or ARM templates. The following error is generated: *Could not save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.| |IPv6 not currently supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.|
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-custom-domain-https.md
Grant Azure Front Door permission to access the certificates in your Azure Key
4. In **Certificate permissions**, select **Get** to allow Front Door to retrieve the certificate.
-5. Select **OK**.
+5. Select **Add**.
- Azure Front Door can now access this Key Vault and the certificates that are stored in this Key Vault.
+6. On the **Access policies** page, select **Save**.
+
+Azure Front Door can now access this Key Vault and the certificates that are stored in this Key Vault.
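You can grant the same permission from a script. The following Azure CLI sketch assumes you know the application (client) ID of the Azure Front Door service principal in your tenant; the vault name and the ID value are placeholders.

```azurecli-interactive
# Sketch: allow the Front Door service principal to read certificates and secrets from the vault
az keyvault set-policy \
    --name MyKeyVault \
    --spn <front-door-application-id> \
    --certificate-permissions get \
    --secret-permissions get
```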
#### Select the certificate for Azure Front Door to deploy
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/troubleshoot-issues.md
The cause of this problem can be one of three things:
:::image type="content" source="./media/troubleshoot-issues/remove-encoding-rule.png" alt-text="Screenshot of accept-encoding rule in a Rule Set.":::
+## 503 responses from Azure Front Door only for HTTPS
+
+### Symptom
+
+* 503 responses are returned only for Azure Front Door (AFD) HTTPS-enabled endpoints.
+* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
+* Intermittent 503 errors with log `ErrorInfo: OriginInvalidResponse`
+
+### Cause
+The cause of this problem can be one of three things:
+* Backend Pool is an IP address
+* Backend Server is returning a certificate that does not match the FQDN of the AFD backend Pool
+* Backend Pool is an Azure Web Apps server
+
+### Troubleshooting steps
+
+* Backend Pool is an IP address
+
+ `EnforceCertificateNameCheck` must be disabled.
+
+ AFD has a switch called "enforceCertificateNameCheck". By default, this setting is enabled. When enabled, AFD checks that the backend pool host name FQDN matches the backend server certificate's Certificate Name (CN) or one of the entries in the Subject Alternative Names (SAN) extension.
+
+   To disable EnforceCertificateNameCheck from the Azure portal:
+
+   In the portal, use the toggle button on the Azure Front Door designer blade to turn this setting on or off. (A CLI sketch for changing this setting follows these troubleshooting steps.)
+
+ ![image](https://user-images.githubusercontent.com/63200992/148067710-1b9b6053-efe3-45eb-859f-f747de300653.png)
+
+* Backend Server is returning a certificate that does not match the FQDN of the AFD backend Pool
+
+   - To resolve this issue, the certificate that the backend returns must match the FQDN, or
+
+   - EnforceCertificateNameCheck must be disabled.
+
+* Backend Pool is an Azure Web Apps server
+
+   - Check whether the Azure web app is configured with IP-based SSL instead of SNI-based SSL. If it's configured as IP-based, change it to SNI.
+
+   - If the backend is unhealthy because of a certificate failure, Azure Front Door returns a 503. You can verify the health of the backends on ports 80 and 443. If only 443 is unhealthy, the problem is likely with SSL. Because the backend is configured to use the FQDN, we know it's sending SNI.
+
+     Using OpenSSL, verify the certificate that the backend returns. Connect to the backend with the `-servername` option; the certificate returned for that SNI value must match the FQDN of the backend pool:
+
+     `openssl s_client -connect backendvm.contoso.com:443 -servername backendvm.contoso.com`
+
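The `enforceCertificateNameCheck` setting described above can also be changed outside the portal. The following Azure CLI command is a rough sketch that assumes the `front-door` CLI extension is installed and that the parameter name matches the current extension version; the profile and resource group names are placeholders.

```azurecli-interactive
# Sketch: disable the backend certificate name check on a Front Door profile
az network front-door update \
    --name MyFrontDoor \
    --resource-group MyResourceGroup \
    --enforce-certificate-name-check Disabled
```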
## Requests sent to the custom domain return a 400 status code ### Symptom
hdinsight Apache Spark Python Package Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-python-package-installation.md
There are two types of open-source components that are available in the HDInsigh
## Understand default Python installation
-HDInsight Spark cluster is created with Anaconda installation. There are two Python installations in the cluster, Anaconda Python 2.7 and Python 3.5. The table below shows the default Python settings for Spark, Livy, and Jupyter.
+HDInsight Spark clusters have Anaconda installed. There are two Python installations in the cluster, Anaconda Python 2.7 and Python 3.5. The table below shows the default Python settings for Spark, Livy, and Jupyter.
|Setting |Python 2.7|Python 3.5| |-|-|-|
HDInsight Spark cluster is created with Anaconda installation. There are two Pyt
|Livy version|Default set to 2.7|Can change config to 3.5| |Jupyter|PySpark kernel|PySpark3 kernel|
+For the Spark 3.1.2 version, the Apache PySpark kernel is removed, and a new Python 3.8 environment is installed under `/usr/bin/miniforge/envs/py38/bin`, which is used by the PySpark3 kernel. The `PYSPARK_PYTHON` and `PYSPARK3_PYTHON` environment variables are updated as follows:
+
+```bash
+export PYSPARK_PYTHON=${PYSPARK_PYTHON:-/usr/bin/miniforge/envs/py38/bin/python}
+export PYSPARK3_PYTHON=${PYSPARK_PYTHON:-/usr/bin/miniforge/envs/py38/bin/python}
+```
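To confirm which interpreter a Spark 3.1.2 cluster actually picks up, you can check the environment variable and the interpreter version from an SSH session on a head node. This is only a verification sketch:

```bash
# Verify the interpreter used by the PySpark3 kernel on Spark 3.1.2 clusters
echo "$PYSPARK_PYTHON"
/usr/bin/miniforge/envs/py38/bin/python --version
```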
+ ## Safely install external Python packages HDInsight clusters depend on the built-in Python environment, both Python 2.7 and Python 3.5. Directly installing custom packages in those default built-in environments may cause unexpected library version changes and break the cluster further. To safely install custom external Python packages for your Spark applications, follow the steps below. 1. Create a Python virtual environment by using conda. A virtual environment provides an isolated space for your projects without breaking others. When you create the Python virtual environment, you can specify the Python version that you want to use. You still need to create a virtual environment even if you want to use Python 2.7 or 3.5. This requirement makes sure that the cluster's default environment doesn't get broken. Run script actions on your cluster for all nodes with the following script to create a Python virtual environment.
- - `--prefix` specifies a path where a conda virtual environment lives. There are several configs that need to be changed further based on the path specified here. In this example, we use the py35new, as the cluster has an existing virtual environment called py35 already.
- - `python=` specifies the Python version for the virtual environment. In this example, we use version 3.5, the same version as the cluster built in one. You can also use other Python versions to create the virtual environment.
- - `anaconda` specifies the package_spec as anaconda to install Anaconda packages in the virtual environment.
-
+ - `--prefix` specifies a path where a conda virtual environment lives. There are several configs that need to be changed further based on the path specified here. In this example, we use the py35new, as the cluster has an existing virtual environment called py35 already.
+ - `python=` specifies the Python version for the virtual environment. In this example, we use version 3.5, the same version as the cluster built in one. You can also use other Python versions to create the virtual environment.
+ - `anaconda` specifies the package_spec as anaconda to install Anaconda packages in the virtual environment.
+ ```bash sudo /usr/bin/anaconda/bin/conda create --prefix /usr/bin/anaconda/envs/py35new python=3.5 anaconda=4.3 --yes ```
HDInsight cluster depends on the built-in Python environment, both Python 2.7 an
- Use conda channel:
- - `seaborn` is the package name that you would like to install.
- - `-n py35new` specify the virtual environment name that just gets created. Make sure to change the name correspondingly based on your virtual environment creation.
+ - `seaborn` is the package name that you would like to install.
+ - `-n py35new` specify the virtual environment name that just gets created. Make sure to change the name correspondingly based on your virtual environment creation.
- ```bash
- sudo /usr/bin/anaconda/bin/conda install seaborn -n py35new --yes
- ```
+ ```bash
+ sudo /usr/bin/anaconda/bin/conda install seaborn -n py35new --yes
+ ```
- Or use PyPi repo, change `seaborn` and `py35new` correspondingly:
- ```bash
- sudo /usr/bin/anaconda/envs/py35new/bin/pip install seaborn
- ```
- Use below command if you would like to install a library with a specific version:
+ ```bash
+ sudo /usr/bin/anaconda/envs/py35new/bin/pip install seaborn
+ ```
+
+ Use the following command if you would like to install a library with a specific version:
- Use conda channel:
- - `numpy=1.16.1` is the package name and version that you would like to install.
- - `-n py35new` specify the virtual environment name that just gets created. Make sure to change the name correspondingly based on your virtual environment creation.
+ - `numpy=1.16.1` is the package name and version that you would like to install.
+ - `-n py35new` specify the virtual environment name that just gets created. Make sure to change the name correspondingly based on your virtual environment creation.
- ```bash
- sudo /usr/bin/anaconda/bin/conda install numpy=1.16.1 -n py35new --yes
- ```
+ ```bash
+ sudo /usr/bin/anaconda/bin/conda install numpy=1.16.1 -n py35new --yes
+ ```
- Or use PyPi repo, change `numpy==1.16.1` and `py35new` correspondingly:
- ```bash
- sudo /usr/bin/anaconda/envs/py35new/bin/pip install numpy==1.16.1
- ```
+ ```bash
+ sudo /usr/bin/anaconda/envs/py35new/bin/pip install numpy==1.16.1
+ ```
- if you don't know the virtual environment name, you can SSH to the head node of the cluster and run `/usr/bin/anaconda/bin/conda info -e` to show all virtual environments.
+ If you don't know the virtual environment name, you can SSH to the head node of the cluster and run `/usr/bin/anaconda/bin/conda info -e` to show all virtual environments.
3. Change Spark and Livy configs and point to the created virtual environment.
HDInsight cluster depends on the built-in Python environment, both Python 2.7 an
If you are using livy, add the following properties to the request body: ```
- ΓÇ£confΓÇ¥ : {
- ΓÇ£spark.yarn.appMasterEnv.PYSPARK_PYTHONΓÇ¥:ΓÇ¥/usr/bin/anaconda/envs/py35/bin/pythonΓÇ¥,
- ΓÇ£spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHONΓÇ¥:ΓÇ¥/usr/bin/anaconda/envs/py35/bin/pythonΓÇ¥
+ "conf" : {
+ "spark.yarn.appMasterEnv.PYSPARK_PYTHON":"/usr/bin/anaconda/envs/py35/bin/python",
+ "spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHON":"/usr/bin/anaconda/envs/py35/bin/python"
} ```
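For example, when you submit a batch job through the Livy REST endpoint on the cluster, the `conf` block is included in the request body. The following `curl` call is a hedged sketch; the cluster name, credentials, and application path are placeholders.

```bash
# Sketch: submit a Livy batch that runs under the py35 virtual environment
# -u admin prompts for the cluster login password
curl -k -u admin -H "Content-Type: application/json" -X POST \
  -d '{
        "file": "wasbs:///example/app.py",
        "conf": {
          "spark.yarn.appMasterEnv.PYSPARK_PYTHON": "/usr/bin/anaconda/envs/py35/bin/python",
          "spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHON": "/usr/bin/anaconda/envs/py35/bin/python"
        }
      }' \
  "https://CLUSTERNAME.azurehdinsight.net/livy/batches"
```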
HDInsight cluster depends on the built-in Python environment, both Python 2.7 an
:::image type="content" source="./media/apache-spark-python-package-installation/check-python-version-in-jupyter.png" alt-text="Check Python version in Jupyter Notebook" border="true"::: - ## Next steps * [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md)
iot-central Quick Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-export-data.md
Before you can export data from your IoT Central application, you need an Azure
Run the following script in the Azure Cloud Shell. Replace the `clustername` value with a unique name for your cluster before you run the script. The cluster name can contain only lowercase letters and numbers: > [!IMPORTANT]
-> The script takes at least 10 minutes to run.
+> The script can take 20 to 30 minutes to run.
```azurecli # The cluster name can contain only lowercase letters and numbers.
az kusto cluster create --cluster-name $clustername \
--enable-auto-stop=true \ --resource-group $resourcegroup --location $location
-# Crete a database in the cluster
+# Create a database in the cluster
az kusto database create --cluster-name $clustername \ --database-name $databasename \ --read-write-database location=$location soft-delete-period=P365D hot-cache-period=P31D \
iot-develop Concepts Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-digital-twin.md
The example IoT Plug and Play device in this article implements a [Temperature C
## Device twins and digital twins
-As well as a digital twin, Azure IoT Hub also maintains a *device twin* for every connected device. A device twin is similar to a digital twin in that it's a representation of a device's properties. The Azure IoT service SDKs include APIs for interacting with device twins.
+Along with a digital twin, Azure IoT Hub also maintains a *device twin* for every connected device. A device twin is similar to a digital twin in that it's a representation of a device's properties. An IoT hub initializes a digital twin and a device twin the first time an IoT Plug and Play device is provisioned. The Azure IoT service SDKs include APIs for interacting with device twins.
-An IoT hub initializes a digital twin and a device twin the first time an IoT Plug and Play device connects.
+Device twins are JSON documents that store device state information, including metadata, configurations, and conditions. To learn more, see [IoT Hub service client examples](concepts-developer-guide-service.md#iot-hub-service-client-examples). Device and solution builders can both use the same set of device twin APIs and SDKs to implement devices and solutions using IoT Plug and Play conventions. In a device twin, the state of a writable property is split across the *desired properties* and *reported properties* sections. All read-only properties are available within the reported properties section.
-Device twins are JSON documents that store device state information including metadata, configurations, and conditions. To learn more, see [IoT Hub service client examples](concepts-developer-guide-service.md#iot-hub-service-client-examples). Both device and solution builders can continue to use the same set of Device Twin APIs and SDKs to implement devices and solutions using IoT Plug and Play conventions.
-
-The digital twin APIs operate on high-level DTDL constructs such as components, properties, and commands. The digital twin APIs make it easier for solution builders to create IoT Plug and Play solutions.
-
-In a device twin, the state of a writable property is split across the *desired properties* and *reported properties* sections. All read-only properties are available within the reported properties section.
-
-In a digital twin, there's a unified view of the current and desired state of the property. The synchronization state for a given property is stored in the corresponding default component `$metadata` section.
+The digital twin APIs operate on high-level DTDL constructs such as components, properties, and commands, and they make it easier for solution builders to create IoT Plug and Play solutions. In a digital twin, there's a unified view of the current and desired state of the property. The synchronization state for a given property is stored in the corresponding default component `$metadata` section.
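For example, you can read the digital twin document for a device without writing any code. The following Azure CLI command is a sketch that assumes the `azure-iot` CLI extension is installed; the hub and device names are placeholders.

```azurecli-interactive
# Sketch: view the digital twin document for an IoT Plug and Play device
az iot hub digital-twin show \
    --hub-name MyIoTHub \
    --device-id MyDevice
```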
### Device twin JSON example
Now that you've learned about digital twins, here are some additional resources:
- [How to use IoT Plug and Play digital twin APIs](howto-manage-digital-twin.md) - [Interact with a device from your solution](tutorial-service.md) - [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin)-- [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md)
+- [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md)
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/overview-iot-plug-and-play.md
IoT Plug and Play enables solution builders to integrate IoT devices with their
You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development.
-To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and the industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
+To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same.
IoT Plug and Play is useful for two types of developers:
As a solution builder, you can use [IoT Central](../iot-central/core/overview-iot-central.md) or [IoT Hub](../iot-hub/about-iot-hub.md) to develop a cloud-hosted IoT solution that uses IoT Plug and Play devices.
-The web UI in IoT Central lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. IoT Plug and Play devices connect directly to an IoT Central application where you can use customizable dashboards to monitor and control your devices. You can also use device templates in the IoT Central web UI to create and edit DTDL models.
+The web UI in IoT Central lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. IoT Plug and Play devices connect directly to an IoT Central application. Here you can use customizable dashboards to monitor and control your devices. You can also use device templates in the IoT Central web UI to create and edit DTDL models.
IoT Hub - a managed cloud service - acts as a message hub for secure, bi-directional communication between your IoT application and your devices. When you connect an IoT Plug and Play device to an IoT hub, you can use the [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md) tool to view the telemetry, properties, and commands defined in the DTDL model.
As a device builder, you can develop an IoT hardware product that supports IoT P
1. Define the device model. You author a set of JSON files that define your device's capabilities using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). A model describes a complete entity such as a physical product, and defines the set of interfaces implemented by that entity. Interfaces are shared contracts that uniquely identify the telemetry, properties, and commands supported by a device. Interfaces can be reused across different models.
-1. Author device software or firmware in a way that their telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). If you are connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step.
+1. You should create device software or firmware in a way that your telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). If you're connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step.
1. The device announces the model ID as part of the MQTT connection. The Azure IoT SDK includes new constructs to provide the model ID at connection time.
iot-develop Quickstart Devkit Espressif Esp32 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-espressif-esp32.md
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
-In this quickstart, you use the Azure FreeRTOS middleware to connect the ESPRESSIF ESP32-Azure IoT Kit (hereafter, the ESP32 DevKit) to Azure IoT.
+In this quickstart, you use the Azure FreeRTOS middleware to connect the ESPRESSIF ESP32-Azure IoT Kit (from now on, the ESP32 DevKit) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming an ESP32 DevKit * Build an image and flash it onto the ESP32 DevKit
Hardware:
## Prepare the development environment
-To set up your development environment, first you install the ESPRESSIF ESP-IDF build environment. The installer includes all the tools required to clone, build, flash and monitor your device.
+To set up your development environment, first you install the ESPRESSIF ESP-IDF build environment. The installer includes all the tools required to clone, build, flash, and monitor your device.
To install the ESP-IDF tools: 1. Download and launch the [ESP-IDF Online installer](https://dl.espressif.com/dl/esp-idf).
To save the configuration:
### Build and flash the image
-In this section, you use the ESP-IDF tools to build, flash and monitor the ESP32 DevKit as it connects to Azure IoT.
+In this section, you use the ESP-IDF tools to build, flash, and monitor the ESP32 DevKit as it connects to Azure IoT.
> [!NOTE] > In the following commands in this section, use a short build output path near your root directory. Specify the build path after the `-B` parameter in each command that requires it. The short path helps to avoid a current issue in the ESPRESSIF ESP-IDF tools that can cause errors with long build path names. The following commands use a local path *C:\espbuild* as an example.
To build the image:
idf.py --no-ccache -B "C:\espbuild" build ```
-1. After the build completes, confirm that the binary image file was created in the build path you specified previously.
+1. After the build completes, confirm that the binary image file was created in the build path that you specified previously.
*C:\espbuild\azure_iot_freertos_esp32.bin*
To view telemetry in IoT Central:
1. Select the device from the device list. 1. Select the **Overview** tab on the device page, and view the telemetry as the device sends messages to the cloud.
- :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-telemetry.png" alt-text="Screenshot of ESP32 DevKit device sending telemetry to IoT Central.":::
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-telemetry.png" alt-text="Screenshot of the ESP32 DevKit device sending telemetry to IoT Central.":::
## Send a command to the device
To remove the entire Azure IoT Central sample application and all its devices an
## Next Steps
-In this quickstart you built a custom image that contains the Azure FreeRTOS middleware sample code, and then flashed the image to the ESP32 DevKit device. You also used the IoT Central portal to create Azure resources, connect the ESP32 DevKit securely to Azure, view telemetry, and send messages.
+In this quickstart, you built a custom image that contains the Azure FreeRTOS middleware sample code, and then you flashed the image to the ESP32 DevKit device. You also used the IoT Central portal to create Azure resources, connect the ESP32 DevKit securely to Azure, view telemetry, and send messages.
As a next step, explore the following articles to learn more about working with embedded devices and connecting them to Azure IoT.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/samples/) :::zone-end
-In this quickstart, you use Azure RTOS to connect the Microchip ATSAME54-XPro (hereafter, the Microchip E54) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect the Microchip ATSAME54-XPro (from now on, the Microchip E54) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming a Microchip E54 in C * Build an image and flash it onto the Microchip E54
To connect the Microchip E54 to Azure, you'll modify a configuration file for Az
### Optional: Install a weather sensor
-If you have the Weather Click sensor and the mikroBUS Xplained Pro adapter, follow the steps in this section; otherwise, skip to [Build the image](#build-the-image). You can complete this quickstart even if you don't have a sensor. The sample code for the device returns simulated data if a real sensor is not present.
+If you have the Weather Click sensor and the mikroBUS Xplained Pro adapter, follow the steps in this section; otherwise, skip to [Build the image](#build-the-image). You can complete this quickstart even if you don't have a sensor. The sample code for the device returns simulated data if a real sensor isn't present.
1. If you have the Weather Click sensor and the mikroBUS Xplained Pro adapter, install them on the Microchip E54 as shown in the following photo:
Termite is now ready to receive output from the Microchip E54.
1. Save the file.
-1. Select **Project > Batch Build**. Then select **build_all** and **Make** to build all projects. You will see build output in the **Build** pane. Confirm the successful compilation and linking of all sample projects.
+1. Select **Project > Batch Build**. Then select **build_all** and **Make** to build all projects. You'll see build output in the **Build** pane. Confirm the successful compilation and linking of all sample projects.
1. Select the green **Download and Debug** button in the toolbar to download the program.
Termite is now ready to receive output from the Microchip E54.
1. Save the file.
-1. Before you can build the sample, you must build the **sample_azure_iot_embedded_pnp** project's dependent libraries: **threadx**, **netxduo**, and **same54_lib**. To build each library right-click its project in the **Projects** pane and select **Build**. Wait for each build to complete before moving to the next library.
+1. Before you can build the sample, you must build the **sample_azure_iot_embedded_pnp** project's dependent libraries: **threadx**, **netxduo**, and **same54_lib**. To build each library, right-click its project in the **Projects** pane and select **Build**. Wait for each build to complete before moving to the next library.
1. After all prerequisite libraries have been successfully built, right-click the **sample_azure_iot_embedded_pnp** project and select **Build**.
To view telemetry in IoT Central portal:
## Call a direct method on the device
-You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
To call a method in IoT Central portal: :::zone pivot="iot-toolset-cmake"
If you experience issues building the device code, flashing the device, or conne
For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). :::zone-end :::zone pivot="iot-toolset-iar-ewarm"
-For help debugging the application, see the selections under **Help** in **IAR EW for ARM**.
+For help with debugging the application, see the selections under **Help** in **IAR EW for ARM**.
:::zone-end :::zone pivot="iot-toolset-mplab"
-For help debugging the application, see the selections under **Help** in **MPLAB X IDE**.
+For help with debugging the application, see the selections under **Help** in **MPLAB X IDE**.
:::zone-end ## Clean up resources
To remove the entire Azure IoT Central sample application and all its devices an
## Next steps
-In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Microchip E54 device device. You also used the IoT Central portal to create Azure resources, connect the Microchip E54 securely to Azure, view telemetry, and send messages.
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Microchip E54 device. You also used the IoT Central portal to create Azure resources, connect the Microchip E54 securely to Azure, view telemetry, and send messages.
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
-In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (from now on, MXCHIP DevKit) to Azure IoT.
You'll also use IoT Explorer and IoT Plug and Play to manage the MXCHIP DevKit. IoT Plug and Play provides an open device model that lets applications programmatically query a device's capabilities and interact with it. A device uses this model to broadcast its capabilities to an IoT Plug and Play-enabled application. By using this model, you can streamline and enhance the tasks of adding, configuring, and managing devices. For more information, see the [IoT Plug and Play documentation](../iot-develop/index.yml).
To add the public model repository:
### Register a device
-In this section, you create a new device instance and register it with the IoT hub you created. You will use the connection information for the newly registered device to securely connect your physical device in a later section.
+In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section.
To register a device:
Keep Termite open to monitor device output in the following steps.
## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In this section and the following sections, you'll use the Plug and Play capabilities surfaced in IoT Explorer to manage and interact with the MXCHIP DevKit. These capabilities rely on the device model published for the MXCHIP DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting the same action from the left side menu of your device pane in IoT Explorer; however, using plug and play often provides an enhanced experience. This is because IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In this section and the following sections, you'll use the Plug and Play capabilities that are surfaced in IoT Explorer to manage and interact with the MXCHIP DevKit. These capabilities rely on the device model published for the MXCHIP DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting the same action from the left side menu of your device pane in IoT Explorer. However, using plug and play often provides an enhanced experience. This is because IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To use Azure CLI to view device properties:
## View telemetry
-With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can perform the same task using Azure CLI.
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
To view telemetry in Azure IoT Explorer:
To use Azure CLI to view device telemetry:
## Call a direct method on the device
-You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can perform the same task using Azure CLI.
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
To call a method in Azure IoT Explorer:
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
-In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (from now on, MXCHIP DevKit) to Azure IoT.
You'll complete the following tasks:
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
zone_pivot_groups: iot-develop-nxp-toolset
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/samples/) :::zone-end
-In this quickstart, you use Azure RTOS to connect the NXP MIMXRT1060-EVK Evaluation kit (hereafter, the NXP EVK) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect the NXP MIMXRT1060-EVK Evaluation kit (from now on, the NXP EVK) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming an NXP EVK in C * Build an image and flash it onto the NXP EVK
Keep Termite open to monitor device output in the following steps.
## Prepare the device
-In this section you use IAR EW IDE to modify a configuration file for Azure IoT settings, build the sample client application, then download and run it on the device.
+In this section, you use the IAR EW IDE to modify a configuration file for Azure IoT settings, build the sample client application, and then download and run it on the device.
### Connect the device
In this section you use IAR EW IDE to modify a configuration file for Azure IoT
1. Save the file.
-1. Select **Project > Batch Build**. Then select **build_all** and **Make** to build all projects. You will see build output in the **Build** pane. Confirm the successful compilation and linking of all sample projects.
+1. Select **Project > Batch Build**. Then select **build_all** and **Make** to build all projects. You'll see build output in the **Build** pane. Confirm the successful compilation and linking of all sample projects.
1. Select the green **Download and Debug** button in the toolbar to download the program.
Keep the terminal open to monitor device output in the following steps.
## Prepare the environment
-In this section you prepare your environment, and use MCUXpresso to build and run the sample application on the device.
+In this section, you prepare your environment, and use MCUXpresso to build and run the sample application on the device.
### Install the device SDK
To call a method in IoT Central portal:
:::zone pivot="iot-toolset-cmake" 1. Select the **Command** tab from the device page.
-1. In the **State** dropdown, select **True**, and then select **Run**. There will be no change on the device as there isn't an available LED to toggle; however, you can view the output in Termite to monitor the status of the methods.
+1. In the **State** dropdown, select **True**, and then select **Run**. There will be no change on the device as there isn't an available LED to toggle. However, you can view the output in Termite to monitor the status of the methods.
:::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/iot-central-invoke-method.png" alt-text="Screenshot of calling a direct method on a device in IoT Central.":::
If you experience issues building the device code, flashing the device, or conne
For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). :::zone-end :::zone pivot="iot-toolset-iar-ewarm"
-For help debugging the application, see the selections under **Help** in **IAR EW for ARM**.
+If you need help debugging the application, see the selections under **Help** in **IAR EW for ARM**.
:::zone-end :::zone pivot="iot-toolset-iar-ewarm"
-For help debugging the application, in MCUXpresso open the **Help > MCUXPresso IDE User Guide** and see the content on Azure RTOS debugging.
+If you need help debugging the application, in MCUXpresso open the **Help > MCUXPresso IDE User Guide** and see the content on Azure RTOS debugging.
:::zone-end ## Clean up resources
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/Renesas/RX65N_Cloud_Kit)
-In this quickstart, you use Azure RTOS to connect the Renesas RX65N Cloud Kit (hereafter, the Renesas RX65N) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect the Renesas RX65N Cloud Kit (from now on, the Renesas RX65N) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming a Renesas RX65N in C * Build an image and flash it onto the Renesas RX65N
You will complete the following tasks:
* Hardware * The [Renesas RX65N Cloud Kit](https://www.renesas.com/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-cloud-kit-renesas-rx65n-cloud-kit) (Renesas RX65N)
- * 2 USB 2.0 A male to Mini USB male cables
+  * Two USB 2.0 A male to Mini USB male cables
* WiFi 2.4 GHz ## Prepare the development environment
To install the tools:
*%USERPROFILE%\AppData\Roaming\GCC for Renesas RX 8.3.0.202004-GNURX-ELF\rx-elf\rx-elf\bin* 1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
-1. Run the following commands to confirm that CMake version 3.14 or later is installed and the RX compiler path is set up correctly.
+1. Run the following commands to confirm that CMake version 3.14 or later is installed. Make certain that the RX compiler path is set up correctly.
```shell cmake --version
load-balancer Load Balancer Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
When you use the virtual machine scale set in the back-end pool of the load bala
## Virtual Machine Scale Set Instance-level IPs
-When virtual machine scale sets with [public IPs per instance](../virtual-network/ip-services/public-ip-address-prefix.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard). Note that when using a Standard Load Balancer, the individual instance IPs are all of type Standard "no-zone" (though the Load Balancer frontend could be zonal or zone-redundant).
+When virtual machine scale sets with [public IPs per instance](https://docs.microsoft.com/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (that is, Basic or Standard). Note that when using a Standard Load Balancer, the individual instance IPs are all of type Standard "no-zone" (though the Load Balancer frontend could be zonal or zone-redundant). A CLI sketch of this setup follows.
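As a rough illustration, the following Azure CLI sketch creates a scale set whose instances each receive a public IP address and that is attached to an existing Standard load balancer. The resource names and image are placeholders, and exact parameter names can vary by CLI version.

```azurecli-interactive
# Sketch: scale set with a public IP per instance behind an existing Standard load balancer
az vmss create \
    --resource-group MyResourceGroup \
    --name MyScaleSet \
    --image Ubuntu2204 \
    --instance-count 2 \
    --public-ip-per-vm \
    --load-balancer MyStandardLoadBalancer \
    --upgrade-policy-mode Automatic \
    --generate-ssh-keys
```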
## Outbound rules
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
$lbip = @{
} $feip = New-AzLoadBalancerFrontendIpConfig @lbip
-## Create load balancer frontend configuration and place in variable. ##
-$feip = New-AzLoadBalancerFrontendIpConfig -Name 'myFrontEnd' -PublicIpAddress $publicIp
- ## Create backend address pool configuration and place in variable. ## $bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
load-testing How To Update Rerun Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-update-rerun-test.md
In this article, you'll learn how to configure your load test to monitor server-side application metrics by using Azure Load Testing Preview.
-Azure Load Testing integrates with Azure Monitor to capture server-side resource metrics for Azure-hosted applications. You can specify which Azure components and resource metrics to monitor for your load test run.
+Azure Load Testing integrates with Azure Monitor to capture server-side resource metrics for Azure-hosted applications. You can specify which [Azure components](./resource-supported-azure-resource-types.md) and resource metrics to monitor for your load test run.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure Load Testing integrates with Azure Monitor to capture server-side resource
## Configure server-side monitoring for a load test
-In this section, you'll update an existing load test to configure the Azure application components to capture server-side resource metrics.
+In this section, you'll update an existing load test to configure the Azure application components to capture server-side resource metrics. When the load test finishes, you can view the server-side metrics in the dashboard, or [compare metrics across multiple test runs](./how-to-compare-multiple-test-runs.md).
+
+For the list of Azure components that Azure Load Testing supports, see [Supported Azure resource types](./resource-supported-azure-resource-types.md).
1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
In this section, you'll update an existing load test to configure the Azure appl
- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md). - To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).+
+- To learn how to identify performance regressions across test runs, see [Compare multiple test runs](./how-to-compare-multiple-test-runs.md).
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/resource-supported-azure-resource-types.md
+
+ Title: Supported Azure resource types
+
+description: 'Learn which Azure resource types are supported for server-side monitoring in Azure Load Testing. You can select specific metrics to be monitored during a load test.'
+++++ Last updated : 01/04/2022++
+# Supported Azure resource types for monitoring in Azure Load Testing Preview
+
+Learn which Azure resource types Azure Load Testing Preview supports for server-side monitoring. You can select specific metrics for each resource type to track and report on for a load test.
+
+To learn how to configure your load test, see [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Supported Azure resource types
+
+This section lists the Azure resource types that Azure Load Testing supports for server-side monitoring.
+
+* API Management
+* App Service
+* App Service plan
+* Application Insights
+* Azure Cache for Redis
+* Azure Cosmos DB
+* Azure Database for MariaDB server
+* Azure Database for MySQL server
+* Azure Database for PostgreSQL server
+* Azure Functions function app
+* Azure Kubernetes Service (AKS)
+* Azure SQL Database
+* Azure SQL elastic pool
+* Azure SQL Managed Instance
+* Event Hubs cluster
+* Event Hubs namespace
+* Key Vault
+* Service Bus
+* Static Web Apps
+* Storage Accounts: Azure Blob Storage/Azure Files/Azure Table Storage/Queue Storage
+* Storage Accounts (classic): Azure Files/Azure Table Storage/Queue Storage
+* Traffic Manager profile
+* Virtual Machine Scale Sets
+* Virtual Machines
+
+## Next steps
+
+* Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+* Learn how to [Get more insights from App Service diagnostics](./how-to-appservice-insights.md).
+* Learn how to [Compare multiple test runs](./how-to-compare-multiple-test-runs.md).
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
If you don't permit access for these dependencies, your ISE deployment fails and
* [Azure IP addresses for connectors in the ISE region, available in this download file](https://www.microsoft.com/download/details.aspx?id=56519) * [App Service Environment management addresses](../app-service/environment/management-addresses.md) * [Azure Traffic Manager management addresses](https://azuretrafficmanagerdata.blob.core.windows.net/probes/azure/probe-ip-ranges.json)
- * [Azure API Management Control Plane IP addresses](../api-management/api-management-using-with-vnet.md#control-plane-ip-addresses)
+ * [Azure API Management Control Plane IP addresses](../api-management/virtual-network-reference.md#control-plane-ip-addresses)
* Service endpoints
If you don't permit access for these dependencies, your ISE deployment fails and
* [Azure App Service Dependencies](../app-service/environment/firewall-integration.md#deploying-your-ase-behind-a-firewall) * [Azure Cache Service Dependencies](../azure-cache-for-redis/cache-how-to-premium-vnet.md#what-are-some-common-misconfiguration-issues-with-azure-cache-for-redis-and-virtual-networks)
- * [Azure API Management Dependencies](../api-management/api-management-using-with-vnet.md#network-configuration-issues)
+ * [Azure API Management Dependencies](../api-management/virtual-network-reference.md)
<a name="create-environment"></a>
marketplace Co Sell Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-configure.md
Previously updated : 12/03/2021 Last updated : 1/04/2022 # Configure co-sell for a commercial marketplace offer
The supported file types are .pdf, .ppt, .pptx, .doc, .docx, .xls, .xlsx, .jpg,
| **Documents** | **Description** | | :- | :-|
-| *Solution/offer one-pager (Required)* | Drive awareness among potential customers with a professionally designed one-pager that showcases the value proposition of your solution.<br><br>You can use one of the relevant templates to provide a customer-ready description of your offering:<br><ul><li> [Microsoft Azure one-pager template](https://aka.ms/Customer-One-Pager_MicrosoftAzure)</li><li>[Microsoft Dynamics 365 one-pager template](https://aka.ms/Customer-One-Pager_MicrosoftDynamics365)</li> <li>[Microsoft 365 one-pager template](https://aka.ms/Customer-One-Pager_MicrosoftOffice365) </li><li>[Windows 10 one-pager template](https://aka.ms/Customer-One-Pager_Windows)</li></ul> <br> Microsoft sales teams may share this information with customers to help determine if your offering may be a good fit, and to ensure that it is customer ready. |
-| *Solution/offer pitch deck (Required)* | You can use the [Customer presentation template](https://aka.ms/GTMServices_CustomerPresentation) to create your pitch deck. This deck should reference the [Reference architecture diagram](reference-architecture-diagram.md). The purpose of this slide deck is to pitch your offer and its value proposition. After ensuring that your offer is customer ready, Microsoft sales teams may share this presentation with customers to articulate the value that your company and Microsoft bring when deploying a joint solution. The presentation should cover what your offer does, how it can help customers, what industries the offer is relevant for, and how it compares with competing solutions. |
-| *Customer case study* (Optional)| Use the [Case study template](https://aka.ms/GTM_Case_Study_Template) to create your customer case study. This information shows a potential customer how you and Microsoft have successfully deployed your offer in prior cases. |
+| *Solution/offer one-pager (Required)* | Drive awareness among potential customers with a professionally designed one-pager that showcases the value proposition of your solution.<br><br>You can use one of the relevant templates to provide a customer-ready description of your offering:<br><ul><li> [Microsoft Azure one-pager template](https://go.microsoft.com/fwlink/?linkid=2171711)</li><li>[Microsoft Dynamics 365 one-pager template](https://go.microsoft.com/fwlink/?linkid=2171609)</li> <li>[Microsoft 365 one-pager template](https://go.microsoft.com/fwlink/?linkid=2171408) </li><li>[Windows 10 one-pager template](https://go.microsoft.com/fwlink/?linkid=2171550)</li></ul><br>Microsoft sales teams may share this information with customers to help determine if your offering may be a good fit, and to ensure that it is customer ready. |
+| *Solution/offer pitch deck (Required)* | You can use the [Customer presentation template](https://go.microsoft.com/fwlink/?linkid=2171712) to create your pitch deck. This deck should reference the [Reference architecture diagram](reference-architecture-diagram.md). The purpose of this slide deck is to pitch your offer and its value proposition. After ensuring that your offer is customer ready, Microsoft sales teams may share this presentation with customers to articulate the value that your company and Microsoft bring when deploying a joint solution. The presentation should cover what your offer does, how it can help customers, what industries the offer is relevant for, and how it compares with competing solutions. |
+| *Customer case study* (Optional)| Use the [Case study template](https://go.microsoft.com/fwlink/?linkid=2171611) to create your customer case study. This information shows a potential customer how you and Microsoft have successfully deployed your offer in prior cases. |
| *Verifiable customer wins* (Optional) | Provide specific examples of customer successes after your offer has been deployed. | | *Channel pitch deck* (Optional) | A slide deck with information that helps channel resellers learn more about your offer and get their sales teams ready to sell it. This deck typically includes an elevator pitch, information about target customers, questions to ask customers, talking points, and links to videos, documentation, and support information. | | *Reference architecture diagram* (Required for Azure IP co-sell incentive status) | A diagram that represents your offer and its relationship with Microsoft cloud services. It may also demonstrate how your offer meets the technical requirements for Azure IP Co-sell incentive status. [Learn more about the reference architecture diagram.](reference-architecture-diagram.md) |
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-your-marketplace-benefits.md
Your benefits are differentiated based on whether your offer is [List, Trial, or
Based on your eligibility, you will be contacted by a member of the Rewards team when your offer goes live.
-List and trial offers receive one-time use benefits. Transact offers are eligible for evergreen benefit engagement. As transacting partners, as you grow your billed sales through the commercial marketplace, you unlock greater benefits per per billed sales (or seats sold) tier.
+List and trial offers receive one-time use benefits. Transact offers are eligible for evergreen benefit engagement. As transacting partners, as you grow your billed sales through the commercial marketplace, you unlock greater benefits per billed sales (or seats sold) tier.
The minimum requirement to publish in the online stores is an MPNID, so these benefits are available to all partners regardless of MPN competency status or partner type. Every partner is empowered to grow your business through the commercial marketplace as a platform.
media-services Configure Connect Java Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/configure-connect-java-howto.md
When you run the command, the `pom.xml`, `App.java`, and other files are created
1. Under the package statement, add these import statements: ```java
- import com.microsoft.azure.AzureEnvironment;
- import com.microsoft.azure.credentials.ApplicationTokenCredentials;
- import com.microsoft.azure.management.mediaservices.v2018_07_01.implementation.MediaManager;
- import com.microsoft.rest.LogLevel;
+ import com.azure.core.management.AzureEnvironment;
+ import com.azure.core.management.profile.AzureProfile;
+ import com.azure.identity.ClientSecretCredential;
+ import com.azure.identity.ClientSecretCredentialBuilder;
+ import com.azure.resourcemanager.mediaservices.MediaServicesManager;
``` 1. To create the Active Directory credentials that you need to make requests, add the following code to the main method of the App class and set the values that you got from [Access APIs](./access-api-howto.md): ```java
- final String clientId = "00000000-0000-0000-0000-000000000000";
- final String tenantId = "00000000-0000-0000-0000-000000000000";
- final String clientSecret = "00000000-0000-0000-0000-000000000000";
- final String subscriptionId = "00000000-0000-0000-0000-000000000000";
- try {
- ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(clientId, tenantId, clientSecret, AzureEnvironment.AZURE);
- credentials.withDefaultSubscriptionId(subscriptionId);
-
- MediaManager manager = MediaManager
- .configure()
- .withLogLevel(LogLevel.BODY_AND_HEADERS)
- .authenticate(credentials, credentials.defaultSubscriptionId());
- System.out.println("signed in");
+ AzureProfile azureProfile = new AzureProfile("<YOUR_TENANT_ID>", "<YOUR_SUBSCRIPTION_ID>", AzureEnvironment.AZURE);
+ ClientSecretCredential clientSecretCredential = new ClientSecretCredentialBuilder()
+ .clientId("<YOUR_CLIENT_ID>")
+ .clientSecret("<YOUR_CLIENT_SECRET>")
+ .tenantId("<YOUR_TENANT_ID>")
+ // authority host is optional
+ .authorityHost("<AZURE_AUTHORITY_HOST>")
+ .build();
+ MediaServicesManager mediaServicesManager = MediaServicesManager.authenticate(clientSecretCredential, azureProfile);
+ System.out.println("Hello Azure");
} catch (Exception e) { System.out.println("Exception encountered.");
When you run the command, the `pom.xml`, `App.java`, and other files are created
- [Media Services concepts](concepts-overview.md) - [Java SDK](https://aka.ms/ams-v3-java-sdk) - [Java reference](/java/api/overview/azure/mediaservices/management)-- [com.microsoft.azure.mediaservices.v2018_07_01:azure-mgmt-media](https://search.maven.org/artifact/com.microsoft.azure.mediaservices.v2018_07_01/azure-mgmt-media/1.0.0-beta/jar)
+- [com.azure.resourcemanager.mediaservices](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager-mediaservices)
## Next steps
-You can now include `import com.microsoft.azure.management.mediaservices.v2018_07_01.*;` and start manipulating entities.
+You can now include `import com.azure.resourcemanager.mediaservices.*;` and start manipulating entities.
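For reference, a complete sign-in sketch based on the updated snippet above might look like the following; the placeholder IDs, the `App` class wrapper, and the `main` method are illustrative rather than taken from this article:

```java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.ClientSecretCredential;
import com.azure.identity.ClientSecretCredentialBuilder;
import com.azure.resourcemanager.mediaservices.MediaServicesManager;

public class App {
    public static void main(String[] args) {
        try {
            // Tenant, subscription, client ID, and secret come from your Azure AD app registration.
            AzureProfile azureProfile = new AzureProfile("<YOUR_TENANT_ID>", "<YOUR_SUBSCRIPTION_ID>", AzureEnvironment.AZURE);
            ClientSecretCredential clientSecretCredential = new ClientSecretCredentialBuilder()
                .clientId("<YOUR_CLIENT_ID>")
                .clientSecret("<YOUR_CLIENT_SECRET>")
                .tenantId("<YOUR_TENANT_ID>")
                .build();
            // Authenticate the Media Services management client with the service principal credential.
            MediaServicesManager mediaServicesManager = MediaServicesManager.authenticate(clientSecretCredential, azureProfile);
            System.out.println("Hello Azure");
        } catch (Exception e) {
            System.out.println("Exception encountered.");
            e.printStackTrace();
        }
    }
}
```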
For more code examples, see the [Java SDK samples](/samples/azure-samples/media-services-v3-java/azure-media-services-v3-samples-using-java/) repo.
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Title: Manage NSG Flow logs - Azure PowerShell
-description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with PowerShell
+description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with Azure PowerShell
-# Configuring Network Security Group Flow logs with PowerShell
+# Configuring Network Security Group Flow logs with Azure PowerShell
> [!div class="op_single_selector"] > - [Azure portal](network-watcher-nsg-flow-logging-portal.md)
openshift Built In Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/built-in-container-registry.md
Last updated 10/15/2020
# Configure built-in container registry for Azure Red Hat OpenShift 4
-Azure Red Hat OpenShift provides an integrated container image registry called [OpenShift Container Registry (OCR)](https://docs.openshift.com/container-platform/4.6/registry/architecture-component-imageregistry.html) that adds the ability to automatically provision new image repositories on demand. This provides users with a built-in location for their application builds to push the resulting images.
+Azure Red Hat OpenShift provides an integrated container image registry called [OpenShift Container Registry (OCR)](https://docs.openshift.com/container-platform/4.5/registry/architecture-component-imageregistry.html) that adds the ability to automatically provision new image repositories on demand. This provides users with a built-in location for their application builds to push the resulting images.
In this article, you'll configure the built-in container image registry for an Azure Red Hat OpenShift (ARO) 4 cluster. You'll learn how to:
echo "Container Registry URL: $HOST"
## Next steps
-Now that you've set up the built-in container image registry, you can get started by deploying an application on OpenShift. For Java applications, check out [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift 4 cluster](howto-deploy-java-liberty-app.md).
+Now that you've set up the built-in container image registry, you can get started by deploying an application on OpenShift. For Java applications, check out [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift 4 cluster](howto-deploy-java-liberty-app.md).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-extensions.md
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
You can now create a TimescaleDB hypertable [from scratch](https://docs.timescale.com/getting-started/creating-hypertables) or migrate [existing time-series data in PostgreSQL](https://docs.timescale.com/getting-started/migrating-data).
-### Restoring a Timescale database
+### Restoring a Timescale database using pg_dump and pg_restore
To restore a Timescale database using pg_dump and pg_restore, you need to run two helper procedures in the destination database: `timescaledb_pre_restore()` and `timescaledb_post_restore()`. First prepare the destination database:
Now you can run pg_dump on the original database and then do pg_restore. After t
```SQL SELECT timescaledb_post_restore(); ```
+For more details on the restore method with a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup).
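Putting those steps together, the end-to-end sequence might look like the following sketch; the connection strings are placeholders and the `pg_dump`/`pg_restore` flags shown are only one common choice:

```sql
-- 1. In the destination database, prepare for the restore:
SELECT timescaledb_pre_restore();

-- 2. From a shell, dump the source database and restore it into the destination, for example:
--    pg_dump -Fc -f dump.bak "<source-connection-string>"
--    pg_restore -d "<destination-connection-string>" --no-owner dump.bak

-- 3. Back in the destination database, complete the restore:
SELECT timescaledb_post_restore();
```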
+### Restoring a Timescale database using timescaledb-backup
+
+ While running the `SELECT timescaledb_post_restore()` procedure listed above, you may get a permission denied error when updating the timescaledb.restoring flag. This is due to the limited ALTER DATABASE permission in cloud PaaS database services. In this case you can use the alternative `timescaledb-backup` tool to back up and restore your Timescale database. Timescaledb-backup is a program that makes dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
+ To do so, follow these steps:
+ 1. Install the tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup)
+ 2. Create the target Azure Database for PostgreSQL server and database
+ 3. Enable the Timescale extension as shown above
+ 4. Grant the azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to the user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore)
+ 5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore the database
+
+ More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
+> [!NOTE]
+> When using `timescaledb-backup` utilities to restore to Azure, note that because database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, you need to replace `@` with the `%40` character encoding.
+ ## Next steps If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-query-performance-insight.md
For Query Performance Insight to function, data must exist in the [Query Store](
## Viewing performance insights The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
-In the portal page of your Azure Database for PostgreSQL server, select **Query performance Insight** under the **Intelligent Performance** section of the menu bar.
-In the portal page of your Azure Database for PostgreSQL server, select **Query performance Insight** under the **Intelligent Performance** section of the menu bar.
+In the portal page of your Azure Database for PostgreSQL server, select **Query performance Insight** under the **Intelligent Performance** section of the menu bar. Displaying the query text is no longer supported, and the query text column will show as empty until it's removed from Query Performance Insight. However, the query text can still be viewed by connecting to azure_sys and querying `query_store.query_texts_view`.
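For example, the following sketch (run while connected to the azure_sys database on your server) retrieves the stored query text directly from Query Store:

```sql
-- Connect to the azure_sys database, then read the stored query text.
SELECT *
FROM query_store.query_texts_view
LIMIT 20;
```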
:::image type="content" source="./media/concepts-query-performance-insight/query-performance-insight-landing-page.png" alt-text="Query Performance Insight long running queries":::
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-query-store.md
View and manage Query Store using the following views and functions. Anyone in t
Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash. ### query_store.qs_view
-This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
+This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID. The query text isn't available via the Intelligent Performance section in the portal, APIs, or the CLI - but it can be found by connecting to azure_sys and querying `query_store.query_texts_view`.
|**Name** |**Type** | **References** | **Description**| |||||
This view returns query text data in Query Store. There is one row for each dist
| query_sql_text | Varchar(10000) | Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. | ### query_store.pgms_wait_sampling_view
-This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event.
+This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event. The query text isn't available via the Intelligent Performance section in the portal, APIs, or the CLI - but it can be found by connecting to azure_sys and querying `query_store.query_texts_view`.
| **Name** | **Type** | **References** | **Description** | |--|--|--|--|
purview How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-manage-quotas.md
Azure Purview is a cloud service for use by data users. You use Azure Purview to
|Maximum time that a scan can run for|7 days|7 days| |[Data Map Capacity unit (CU)](concept-elastic-data-map.md) |1 CU (25 Operations/second throughput and 2 GB metadata storage) | 100 CU (Contact Support for higher CU)| |Data Map Operations throughput |25 Operations/second for each Capacity Unit | 2,500 Operations/Sec for 100 CU (Contact Support for more throughput)|
-|Data Map Storage |2 GB for each Capacity Unit | 200 GB for for 100 CU (Contact Support for more storage) |
+|Data Map Storage |10 GB for each Capacity Unit | 200 GB for 100 CU (Contact Support for more storage) |
|Data Map elasticity window | 1 - 8 CU (Data Map can auto scale up/down based on throughput within elasticity window) | Contact support to get higher elasticity window | |Size of assets per account|100M physical assets |Contact Support| |Maximum size of an asset in a catalog|2 MB|2 MB|
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-credentials.md
If you are using the Purview system-assigned managed identity (SAMI) to set up s
- [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration) - [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration)
-## Create Azure Key Vaults connections in your Azure Purview account
-
-Before you can create a Credential, first associate one or more of your existing Azure Key Vault instances with your Azure Purview account.
-
-1. From the [Azure portal](https://portal.azure.com), select your Azure Purview account and open the [Purview Studio](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the studio and then navigate to **credentials**.
-
-2. From the **Credentials** page, select **Manage Key Vault connections**.
-
- :::image type="content" source="media/manage-credentials/manage-kv-connections.png" alt-text="Manage Azure Key Vault connections":::
-
-3. Select **+ New** from the Manage Key Vault connections page.
-
-4. Provide the required information, then select **Create**.
-
-5. Confirm that your Key Vault has been successfully associated with your Azure Purview account as shown in this example:
-
- :::image type="content" source="media/manage-credentials/view-kv-connections.png" alt-text="View Azure Key Vault connections to confirm.":::
- ## Grant Azure Purview access to your Azure Key Vault Currently Azure Key Vault supports two permission models:
Follow these steps only if permission model in your Azure Key Vault resource is
:::image type="content" source="media/manage-credentials/akv-add-rbac.png" alt-text="Azure Key Vault RBAC":::
+## Create Azure Key Vaults connections in your Azure Purview account
+
+Before you can create a Credential, first associate one or more of your existing Azure Key Vault instances with your Azure Purview account.
+
+1. From the [Azure portal](https://portal.azure.com), select your Azure Purview account and open the [Purview Studio](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the studio and then navigate to **credentials**.
+
+2. From the **Credentials** page, select **Manage Key Vault connections**.
+
+ :::image type="content" source="media/manage-credentials/manage-kv-connections.png" alt-text="Manage Azure Key Vault connections":::
+
+3. Select **+ New** from the Manage Key Vault connections page.
+
+4. Provide the required information, then select **Create**.
+
+5. Confirm that your Key Vault has been successfully associated with your Azure Purview account as shown in this example:
+
+ :::image type="content" source="media/manage-credentials/view-kv-connections.png" alt-text="View Azure Key Vault connections to confirm.":::
## Create a new credential
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-classifications.md
Azure Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_expr
Each classification name is prefixed by MICROSOFT.
+> [!Note]
+> Azure Purview can classify both structured data (CSV, TSV, JSON, SQL Table, etc.) and unstructured data (DOC, PDF, TXT, etc.). However, certain classifications apply only to structured data. Here is the list of classifications that Purview doesn't apply to unstructured data: City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode
++
## Bloom Filter Classifications
## City, Country, and Place
The City, Country, and Place filters have been prepared using the best datasets available.
-## Person
+## Person Name
-Person bloom filter has been prepared using the below two datasets.
+Person Name bloom filter has been prepared using the below two datasets.
- [2010 US Census Data for Last Names (162-K entries)](https://www.census.gov/topics/population/genealogy/data/2010_surnames.html) - [Popular Baby Names (from SSN), using all years 1880-2019 (98-K entries)](https://www.ssa.gov/oact/babynames/limits.html)
not applicable
- date of issue - date of expiry
+## Location
+
+### Format
+Longitude can range from -180.0 to 180.0. Latitude can range from -90.0 to 90.0.
+
+### Checksum
+Not applicable
+
+### Keywords
+
+- lat
+- latitude
+- long
+- longitude
+- coord
+- coordinates
+- geo
+- geolocation
+- loc
+- location
+- position
## Luxemburg driver's license number
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/route-server-faq.md
No. We'll add IPv6 support in the future.
If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the VMs in the virtual network. When the VMs send traffic to the destination of this route, the VM hosts will do Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
+### Does Azure Route Server preserve the BGP AS Path of the route it receives?
+
+Yes, Azure Route Server propagates the route with the BGP AS Path intact.
+ ### Does Azure Route Server preserve the BGP communities of the route it receives? Yes, Azure Route Server propagates the route with the BGP communities as is.
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Azure Cognitive Search requires an encrypted channel for all indexer requests ov
The `[MSSQL13.MSSQLSERVER]` part varies based on version and instance name.
- + Set the value of the **Certificate** key to the **thumbprint** of the TLS/SSL certificate you imported to the VM.
+ + Set the value of the **Certificate** key to the **thumbprint** (without spaces) of the TLS/SSL certificate you imported to the VM.
There are several ways to get the thumbprint, some better than others. If you copy it from the **Certificates** snap-in in MMC, you will probably pick up an invisible leading character [as described in this support article](https://support.microsoft.com/kb/2023869/), which results in an error when you attempt a connection. Several workarounds exist for correcting this problem. The easiest is to backspace over and then retype the first character of the thumbprint to remove the leading character in the key value field in regedit. Alternatively, you can use a different tool to copy the thumbprint.
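One way to avoid the hidden-character problem is to read the thumbprint with PowerShell rather than copying it from MMC; the `Thumbprint` property contains no spaces or invisible characters. The subject name below is a placeholder for your own certificate:

```powershell
# List the certificates in the local machine personal store with their thumbprints.
Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Subject, Thumbprint

# Capture the thumbprint of a specific certificate (replace the subject with your own).
$thumbprint = (Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq 'CN=contoso.com' }).Thumbprint
$thumbprint
```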
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-private.md
Last updated 08/13/2021
# Make indexer connections through a private endpoint
-> [!NOTE]
-> You can use the [trusted Microsoft service approach](../storage/common/storage-network-security.md#trusted-microsoft-services) to bypass virtual network or IP restrictions on a storage account. You can also enable the search service to access data in the storage account. To do so, see [Indexer access to Azure Storage with the trusted service exception](search-indexer-howto-access-trusted-service-exception.md).
->
-> However, when you use this approach, communication between Azure Cognitive Search and your storage account happens via the public IP address of the storage account, over the secure Microsoft backbone network.
- Many Azure resources, such as Azure storage accounts, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. If you're using an indexer to index data in Azure Cognitive Search, and your data source is on a private network, you can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach the data. This indexer connection method is subject to the following two requirements:
This indexer connection method is subject to the following two requirements:
+ The Azure Cognitive Search service must be on the Basic tier or higher. The feature isn't available on the Free tier. Additionally, if your indexer has a skillset, the tier must be Standard 2 (S2) or higher. For more information, see [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits).
-## Shared Private Link Resources Management APIs
+## Shared private link resources management APIs
-Private endpoints of secured resources that are created through Azure Cognitive Search APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as a storage account, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/).
+Private endpoints of secured resources that are created through Azure Cognitive Search APIs are referred to as *shared private link resources* or *managed outbound private endpoints*. This is because you're "sharing" access to a resource, such as a storage account, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). The shared private link is the mechanism by which Azure Cognitive Search makes the connection to resources in a private network.
Through its Management REST API, Azure Cognitive Search provides a [CreateOrUpdate](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update) operation that you can use to configure access from an Azure Cognitive Search indexer.
You can also query the Azure resources for which outbound private endpoint conne
In the remainder of this article, a mix of Azure portal (or the [Azure CLI](/cli/azure/) if you prefer) and [Postman](https://www.postman.com/) (or any other HTTP client like [curl](https://curl.se/) if you prefer) is used to demonstrate the REST API calls. > [!NOTE]
-> There are Azure Cognitive Search data sources and other configurations that require creating more than one shared private link to work appropriately. Here is a list of the configurations with this requirement and which group IDs are necessary for each:
-> * **Azure Data Lake Storage Gen2 data source** - Create two shared private links: One shared private link with the groupID 'dfs' and another shared private link with the groupID 'blob'.
-> * **Skillset with Knowledge store configured** - One or two shared private links are necessary, depending on the projections set for Knowledge store:
-> * If using blob and/or file projections, create one shared private link with the groupID 'blob'.
-> * If using table projections, create one shared private link with the groupID 'table'.
-> * In case blob/file and also table projections are used, create two shared private links: one with groupID 'blob' and one with groupID 'table'.
-> * **Indexer with cache enabled** - Create two shared private links: One shared private link with the groupID 'table' and another shared private link with the groupID 'blob'.
+> There are Azure Cognitive Search data sources and other configurations that require creating particular shared private link resource(s) to work appropriately. For the full list, see **[Additional configuration requirements](#additional-configuration-requirements)**.
## Set up indexer connection through private endpoint
-Use the following instructions to set up an indexer connection through a private endpoint to a secure Azure resource.
-
-The examples in this article are based on the following assumptions:
-* The name of the search service is _contoso-search_, which exists in the _contoso_ resource group of a subscription with subscription ID _00000000-0000-0000-0000-000000000000_.
-* The resource ID of this search service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search_.
-
-### Step 1: Secure your Azure resource
-
-The steps for restricting access varies by resource. The following scenarios show three of the more common types of resources.
--- Scenario 1: Azure Storage-
- The following is an example of how to configure an Azure storage account firewall. If you select this option and leave the page empty, it means that no traffic from virtual networks is allowed.
+Use the following instructions to set up an indexer connection through a private endpoint to a secure Azure resource.
- ![Screenshot of the "Firewalls and virtual networks" pane for Azure storage, showing the option to allow access to selected networks.](media\search-indexer-howto-secure-access\storage-firewall-noaccess.png)
--- Scenario 2: Azure Key Vault-
- The following is an example of how to configure Azure Key Vault firewall.
-
- ![Screenshot of the "Firewalls and virtual networks" pane for Azure Key Vault, showing the option to allow access to selected networks.](media\search-indexer-howto-secure-access\key-vault-firewall-noaccess.png)
-
-- Scenario 3: Azure Functions-
- No network setting changes are needed for Azure Functions firewalls. Later in the following steps, when you create the shared private endpoint, the Function will automatically only allow access through private link after the creation of a shared private endpoint to the Function.
-
-### Step 2: Create a shared private link resource to the Azure resource
+### Step 1: Create a shared private link resource to the Azure resource
The following section describes how to create a shared private link resource either using the Azure portal or the Azure CLI. #### Option 1: Portal > [!NOTE]
-> The portal only supports creating a shared private endpoint using group ID values that are GA. For MySQL and Azure Functions, use the Azure CLI steps described in option 2, which follows.
+> Azure portal only supports creating a shared private link resource using **Group ID** values that are generally available. For **[MySQL Private Link (Preview)](../mysql/concepts-data-access-security-private-link.md)** and **[Azure Functions Private Link (Preview)](../azure-functions/functions-networking-options.md)**, use the Azure CLI steps described in **Option 2**, which follows.
-To request Azure Cognitive Search to create an outbound private endpoint connection, via the Shared Private Access blade, click on "Add Shared Private Access". On the blade that opens on the right, you can choose to "Connect to an Azure resource in my directory" or "Connect to an Azure resource by resource ID or alias".
+To request Azure Cognitive Search to create an outbound private endpoint connection, via the *Shared Private Access* blade, click on "Add Shared Private Access". On the blade that opens on the right, you can choose to "Connect to an Azure resource in my directory" or "Connect to an Azure resource by resource ID or alias".
-When using the first option (recommended), the blade will help guide you to pick the appropriate Azure resource and will autofill in other properties such as the group ID of the resource and the resource type.
+When using the first option (recommended), the blade will help guide you to pick the appropriate Azure resource and will autofill in other properties such as the **Group ID** of the resource and the resource type.
![Screenshot of the "Add Shared Private Access" pane, showing a guided experience for creating a shared private link resource. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource.png)
-When using the second option, you can enter the Azure resource ID manually and choose the appropriate group ID. The group IDs are listed at the beginning of this article.
+When using the second option, you can enter the Azure resource ID manually and choose the appropriate **Group ID**. The **Group ID**s are listed at the beginning of this article.
![Screenshot of the "Add Shared Private Access" pane, showing the manual experience for creating a shared private link resource. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource-manual.png) #### Option 2: Azure CLI
-Alternatively, you can make the following API call with the [Azure CLI](/cli/azure/). Use the 2020-08-01-preview API version if you're using a group ID that is in preview. For example, group IDs *sites* and *mysqlServer* and in preview and require you to use the preview API.
+Alternatively, you can make the following API call with the [Azure CLI](/cli/azure/). Use the preview or generally available API version (*2020-08-01* or later) if you're using a **Group ID** that is in preview. For example, **Group ID**s *sites* and *mysqlServer* are in preview and require you to use the preview API.
```dotnetcli az rest --method put --uri https://management.azure.com/subscriptions/<search service subscription ID>/resourceGroups/<search service resource group name>/providers/Microsoft.Search/searchServices/<search service name>/sharedPrivateLinkResources/<shared private endpoint name>?api-version=2020-08-01 --body @create-pe.json
A `202 Accepted` response is returned on success. The process of creating an out
+ A private endpoint, allocated with a private IP address in a `"Pending"` state. The private IP address is obtained from the address space that's allocated to the virtual network of the execution environment for the search service-specific private indexer. Upon approval of the private endpoint, any communication from Azure Cognitive Search to the Azure resource originates from the private IP address and a secure private link channel.
-+ A private DNS zone for the type of resource, based on the `groupId`. By deploying this resource, you ensure that any DNS lookup to the private resource utilizes the IP address that's associated with the private endpoint.
++ A private DNS zone for the type of resource, based on the **Group ID**. By deploying this resource, you ensure that any DNS lookup to the private resource utilizes the IP address that's associated with the private endpoint.
-Be sure to specify the correct `groupId` for the type of resource for which you're creating the private endpoint. Any mismatch will result in a non-successful response message.
+Be sure to specify the correct **Group ID** for the type of resource for which you're creating the private endpoint. Any mismatch will result in a non-successful response message.
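For reference, the request body referenced as `create-pe.json` in the `az rest` example above might look like the following sketch; the storage account resource ID and the request message are placeholders:

```json
{
  "properties": {
    "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Storage/storageAccounts/contosostorage",
    "groupId": "blob",
    "requestMessage": "Please approve this connection for the contoso-search indexers."
  }
}
```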
-### Step 3: Check the status of the private endpoint creation
+### Step 2: Check the status of the private endpoint creation
In this step you'll confirm that the provisioning state of the resource changes to "Succeeded". #### Option 1: Portal > [!NOTE]
-> The provisioning state will be visible in the portal for both GA and group IDs that are in preview.
+> The "Provisioning State" will be visible in the Azure portal for **Group ID**s that are generally available as well as for those in preview.
The portal will show you the state of the shared private endpoint. In the following example the status is "Updating".
You can poll for the status by manually querying the `Azure-AsyncOperationHeader
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2020-08-01 ```
-### Step 4: Approve the private endpoint connection
+### Step 3: Approve the private endpoint connection
> [!NOTE]
-> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to the Azure resource you're connecting to. Alternately, you could use the [REST API](/rest/api/storagerp/privateendpointconnections) that's available via the storage resource provider.
+> In this section, you use the Azure portal for the approval flow of a private endpoint to the Azure resource you're connecting to. Alternatively, you could use the **[REST API](/rest/api/storagerp/privateendpointconnections)** that's available via the Storage resource provider.
>
-> Other providers, such as Azure Cosmos DB or Azure SQL Server, offer similar storage resource provider APIs for managing private endpoint connections.
+> Other providers, such as Azure Cosmos DB or Azure SQL Server, offer similar resource provider REST APIs for managing private endpoint connections.
1. In the Azure portal, navigate to the Azure resource that you're connecting to and select the **Networking** tab. Then navigate to the section that lists the private endpoint connections. Following is an example for a storage account. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0
After the private endpoint connection request is approved, traffic is *capable* of flowing through the private endpoint. After the private endpoint is approved, Azure Cognitive Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
-### Step 5: Query the status of the shared private link resource
+### Step 4: Query the status of the shared private link resource
To confirm that the shared private link resource has been updated after approval, revisit the "Shared Private Access" blade of the search service on the Azure portal and check the "Connection State". ![Screenshot of the Azure portal, showing an "Approved" shared private link resource.](media\search-indexer-howto-secure-access\new-shared-private-link-resource-approved.png)
-Alternatively you can also obtain the "Connection state" by using the [GET API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get).
+Alternatively, you can also obtain the "Connection state" by using the [GET API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get).
```dotnetcli az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe?api-version=2020-08-01
This would return a JSON, where the connection state would show up as "status" u
If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and the indexer can be configured to communicate over the private endpoint.
+### Step 5: Secure your Azure resource
+
+The steps for restricting access varies by resource. The following scenarios show three of the more common types of resources.
+
+- Scenario 1: Azure Storage
+
+ The following is an example of how to configure an Azure storage account firewall. If you select this option and leave the page empty, it means that no traffic from virtual networks is allowed.
+
+ ![Screenshot of the "Firewalls and virtual networks" pane for Azure storage, showing the option to allow access to selected networks.](media\search-indexer-howto-secure-access\storage-firewall-noaccess.png)
+
+- Scenario 2: Azure Key Vault
+
+ The following is an example of how to configure Azure Key Vault firewall.
+
+ ![Screenshot of the "Firewalls and virtual networks" pane for Azure Key Vault, showing the option to allow access to selected networks.](media\search-indexer-howto-secure-access\key-vault-firewall-noaccess.png)
+
+- Scenario 3: Azure Functions
+
+ No network setting changes are needed for Azure Functions firewalls. Later in the following steps, when you create the shared private endpoint, the Function will automatically only allow access through private link after the creation of a shared private endpoint to the Function.
++ ### Step 6: Configure the indexer to run in the private environment > [!NOTE]
-> You can perform this step before the private endpoint connection is approved. Until the private endpoint connection is approved, any indexer that tries to communicate with a secure resource (such as the storage account) will end up in a transient failure state. New indexers will fail to be created. As soon as the private endpoint connection is approved, indexers can access the private storage account.
+> You can perform this step before the private endpoint connection is approved. However, until the private endpoint connection shows as approved, any indexer that tries to communicate with a secure resource (such as the storage account) will end up in a transient failure state and new indexers will fail to be created.
The following steps show how to configure the indexer to run in the private environment using the REST API. You can also set the execution environment using the JSON editor in the portal.
After the indexer is created successfully, it should connect to the Azure resour
> [!NOTE] > If you already have existing indexers, you can update them via the [PUT API](/rest/api/searchservice/create-indexer) by setting the `executionEnvironment` to `private` or using the JSON editor in the portal.
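For illustration, the part of an indexer definition that carries this setting might look like the following sketch; the indexer, data source, and index names are placeholders:

```json
{
  "name": "my-blob-indexer",
  "dataSourceName": "my-blob-datasource",
  "targetIndexName": "my-index",
  "parameters": {
    "configuration": {
      "executionEnvironment": "private"
    }
  }
}
```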
+## Additional configuration requirements
+
+Here is a list of the data sources and configurations that have special conditions for shared private link resources and which **Group ID**s are necessary for each to work appropriately:
+
++ **[Azure Data Lake Storage Gen2 data source](search-howto-index-azure-data-lake-storage.md)** - Create two shared private links: One shared private link with the **Group ID** *'dfs'* and another shared private link with the **Group ID** *'blob'*.
++ **[Skillset with knowledge store configured](knowledge-store-concept-intro.md)** - One or two shared private links are necessary, depending on the projections set for knowledge store:
+ + If using object or file projections, create one shared private link with the **Group ID** *'blob'*.
+ + If using table projections, create one shared private link with the **Group ID** *'table'*.
+ + If using all of the projections (object, table, and file), create two shared private links: one with **Group ID** *'blob'* and one with **Group ID** *'table'*.
++ **[Indexer with incremental enrichment (Cache enabled)](cognitive-search-incremental-indexing-conceptual.md)** - Create two shared private links: one shared private link with the **Group ID** 'table' and another shared private link with the **Group ID** 'blob'.
+
## Troubleshooting
+ If your indexer creation fails with an error message such as "Data source credentials are invalid," it means that either the status of the private endpoint connection is not yet *Approved* or the connection is not functional. To remedy the issue:
After the indexer is created successfully, it should connect to the Azure resour
+ If you're viewing your data source's networking page in the Azure portal and you select a private endpoint that you created for your Azure Cognitive Search service to access this data source, you may receive a *No Access* error. This is expected. You can change the status of the connection request via the target service's portal page but to further manage the shared private link resource you need to view the shared private link resource in your search service's network page in the Azure portal.
-[Quotas and limits](search-limits-quotas-capacity.md) determine how many shared private link resources can be created and depend on the SKU of the search service.
+ [Quotas and limits](search-limits-quotas-capacity.md) determine how many shared private link resources can be created and depend on the SKU of the search service.
+++ If you are experiencing errors after you have followed all the steps listed in [Set up indexer connection through private endpoint](#set-up-indexer-connection-through-private-endpoint), check [Additional configuration requirements](#additional-configuration-requirements) in case you are missing any necessary managed outbound private endpoints for your setup. ## Next steps
-Learn more about private endpoints:
+Learn more about private endpoints and other secure connection methods:
+ [Troubleshoot issues with shared private link resources](troubleshoot-shared-private-link-resources.md) + [What are private endpoints?](../private-link/private-endpoint-overview.md)
-+ [DNS configurations needed for private endpoints](../private-link/private-endpoint-dns.md)
++ [DNS configurations needed for private endpoints](../private-link/private-endpoint-dns.md)++ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/troubleshoot-shared-private-link-resources.md
In addition, the specified `groupId` needs to be valid for the specified resourc
### Azure Resource Manager deployment failures
-A search service initiates the request to create a shared private link, but Azure Resource Manager performs the actual work. You can [check the deployment's status](search-indexer-howto-access-private.md#step-3-check-the-status-of-the-private-endpoint-creation) in the portal or by query, and address any errors that might occur.
+A search service initiates the request to create a shared private link, but Azure Resource Manager performs the actual work. You can [check the deployment's status](search-indexer-howto-access-private.md#step-2-check-the-status-of-the-private-endpoint-creation) in the portal or by query, and address any errors that might occur.
Shared private link resources that have failed Azure Resource Manager deployment will show up in [List](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/list-by-service) and [Get](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get) API calls, but will have a "Provisioning State" of `Failed`. Once the reason of the Azure Resource Manager deployment failure has been ascertained, delete the `Failed` resource and re-create it after applying the appropriate resolution from the following table.
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-redis-cache.md
+
+ Title: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise with Service Connector
+description: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise into your application with Service Connector
++++ Last updated : 1/3/2022++
+# Integrate Azure Cache for Redis with Service Connector
+
+This page shows the supported authentication types and client types of Azure Cache for Redis using Service Connector. You might still be able to connect to Azure Cache for Redis in other programming languages without using Service Connector. This page also shows the default environment variable names and values (or Spring Boot configuration) that you get when you create the service connection. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute service
+
+- Azure App Service
+- Azure Spring Cloud
+
+## Supported Authentication types and client types
+
+| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal |
+| | | | | |
+| .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
+| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | |
+| Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
+| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
+
+## Default environment variable names or application properties
+
+### .NET (StackExchange.Redis)
+
+**Secret/ConnectionString**
+
+| Default environment variable name | Description | Example value |
+| | | |
+| AZURE_REDIS_CONNECTIONSTRING | StackExchange.Redis connection string | `{redis-server}.redis.cache.windows.net:6380,password={redis-key},ssl=True,defaultDatabase=0` |
+
+### Java (Jedis)
+
+**Secret/ConnectionString**
+
+| Default environment variable name | Description | Example value |
+| | | |
+| AZURE_REDIS_CONNECTIONSTRING | Jedis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+
+### Java - Spring Boot (spring-boot-starter-data-redis)
+
+**Secret/ConnectionString**
+
+| Application properties | Description | Example value |
+| | | |
+| spring.redis.host | Redis host | `{redis-server}.redis.cache.windows.net` |
+| spring.redis.port | Redis port | `6380` |
+| spring.redis.database | Redis database | `0` |
+| spring.redis.password | Redis key | `{redis-key}` |
+| spring.redis.ssl | SSL setting | `true` |
+
+### Node.js (node-redis)
+
+**Secret/ConnectionString**
+
+| Default environment variable name | Description | Example value |
+||||
+| AZURE_REDIS_CONNECTIONSTRING | node-redis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
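
As an illustration, a Node.js app might consume this variable as follows; this is a sketch using the node-redis v4 client, and the key name `greeting` is only an example:

```javascript
const { createClient } = require('redis');

// AZURE_REDIS_CONNECTIONSTRING is injected by Service Connector (see the table above).
const client = createClient({ url: process.env.AZURE_REDIS_CONNECTIONSTRING });
client.on('error', (err) => console.error('Redis client error', err));

async function main() {
  await client.connect();
  await client.set('greeting', 'hello from Service Connector');
  console.log(await client.get('greeting'));
  await client.quit();
}

main();
```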
++
+### Python (redis-py)
+
+**Secret/ConnectionString**
+
+| Default environment variable name | Description | Example value |
+||||
+| AZURE_REDIS_CONNECTIONSTRING | redis-py connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+
+### Go (go-redis)
+
+**Secret/ConnectionString**
+
+| Default environment variable name | Description | Example value |
+||||
+| AZURE_REDIS_CONNECTIONSTRING | go-redis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+
+## Next steps
+
+Follow the tutorials listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/overview.md
Once a service connection is created. Developers can validate and check connecti
* Azure Storage (Blob, Queue, File and Table storage) * Azure Key Vault * Azure SignalR Service
+* Azure Cache for Redis (Basic, Standard, Premium, and Enterprise tiers)
* Apache Kafka on Confluent Cloud ## How to use Service Connector?
spring-cloud Diagnostic Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/diagnostic-services.md
Choose the log category and metric category you want to monitor.
| **ApplicationConsole** | Console log of all customer applications. | | **SystemLogs** | Currently, only [Spring Cloud Config Server](https://cloud.spring.io/spring-cloud-config/reference/html/#_spring_cloud_config_server) logs in this category. | | **IngressLogs** | [Ingress logs](#show-ingress-log-entries-containing-a-specific-host) of all customer's applications, only access logs. |
+| **BuildLogs** | [Build logs](#show-build-log-entries-for-a-specific-app) of all customer's applications for each build stage. |
## Metrics
AppPlatformIngressLogs
| sort by TimeGenerated ```
+### Show build log entries for a specific app
+
+To review log entries for a specific app during the build process, run the following query. Replace the *`<app-name>`* placeholder with your application name.
+
+```sql
+AppPlatformBuildLogs
+| where TimeGenerated > ago(1h) and PodName contains "<app-name>"
+| sort by TimeGenerated
+```
+
+### Show build log entries for a specific app in a specific build stage
+
+To review log entries for a specific app in a specific build stage, run the following query. Replace the *`<app-name>`* placeholder with your application name. Replace the *`<build-stage>`* placeholder with one of the following values, which represent the stages of the build process: `prepare`, `detect`, `restore`, `analyze`, `build`, `export`, or `completion`.
+
+```sql
+AppPlatformBuildLogs
+| where TimeGenerated > ago(1h) and PodName contains "<app-name>" and ContainerName == "<build-stage>"
+| sort by TimeGenerated
+```
+ ### Learn more about querying application logs Azure Monitor provides extensive support for querying application logs by using Log Analytics. To learn more about this service, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md). For more information about building queries to analyze your application logs, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
spring-cloud How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-prepare-app-deployment.md
The following table lists the supported Spring Boot and Spring Cloud combination
Spring Boot version | Spring Cloud version |
-2.3.x | Hoxton.SR8+
2.4.x, 2.5.x | 2020.0 aka Ilford +
+2.6.x | 2021.0.0+
> [!NOTE] > - Please upgrade Spring Boot to 2.5.2 or 2.4.8 to address the following CVE report [CVE-2021-22119: Denial-of-Service attack with spring-security-oauth2-client](https://tanzu.vmware.com/security/cve-2021-22119). If you are using Spring Security, please upgrade it to 5.5.1, 5.4.7, 5.3.10 or 5.2.11. > - An issue was identified with Spring Boot 2.4.0 on TLS authentication between apps and Spring Cloud Service Registry, please use 2.4.1 or above. Please refer to [FAQ](./faq.md?pivots=programming-language-java#development) for the workaround if you insist on using 2.4.0.
-### Dependencies for Spring Boot version 2.3
+### Dependencies for Spring Boot version 2.4/2.5/2.6
-For Spring Boot version 2.3 add the following dependencies to the application POM file.
+For Spring Boot version 2.4/2.5, add the following dependencies to the application POM file.
```xml <!-- Spring Boot dependencies --> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId>
- <version>2.3.4.RELEASE</version>
+ <version>2.4.8</version>
</parent> <!-- Spring Cloud dependencies -->
For Spring Boot version 2.3 add the following dependencies to the application PO
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId>
- <version>Hoxton.SR8</version>
+ <version>2020.0.2</version>
<type>pom</type> <scope>import</scope> </dependency>
For Spring Boot version 2.3 add the following dependencies to the application PO
</dependencyManagement> ```
-### Dependencies for Spring Boot version 2.4/2.5
-
-For Spring Boot version 2.4/2.5 add the following dependencies to the application POM file.
+For Spring Boot version 2.6, add the following dependencies to the application POM file.
```xml <!-- Spring Boot dependencies --> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId>
- <version>2.4.8</version>
+ <version>2.6.0</version>
</parent> <!-- Spring Cloud dependencies -->
For Spring Boot version 2.4/2.5 add the following dependencies to the applicatio
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId>
- <version>2020.0.2</version>
+ <version>2021.0.0</version>
<type>pom</type> <scope>import</scope> </dependency>
sql-database Sql Database Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-add-elastic-pool-to-failover-group-cli.md
- Title: CLI example- Failover group - Azure SQL Database elastic pool
-description: Azure CLI example script to create an Azure SQL Database elastic pool, add it to a failover group, and test failover.
-------- Previously updated : 07/16/2019-
-# Use CLI to add an Azure SQL Database elastic pool to a failover group
-
-This Azure CLI script example creates a single database, adds it to an elastic pool, creates a failover group, and tests failover.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh "Add elastic pool to a failover group")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql elastic-pool](/cli/azure/sql/elastic-pool) | Elastic pool commands. |
-| [az sql failover-group ](/cli/azure/sql/failover-group) | Failover group commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
-
-Additional SQL Database Azure CLI script samples can be found in the [Azure SQL Database Azure CLI scripts](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Add Managed Instance To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-add-managed-instance-to-failover-group-cli.md
- Title: CLI example- Failover group - Azure SQL Managed Instance
-description: Azure CLI example script to create an Azure SQL Managed Instance, add it to a failover group, and test failover.
-------- Previously updated : 07/16/2019-
-# Use CLI to add an Azure SQL Managed Instance to a failover group
-
-This Azure CLI example creates two managed instances, adds them to a failover group, and then tests failover from the primary managed instance to the secondary managed instance.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/failover-groups/add-managed-instance-to-failover-group-az-cli.sh "Add managed instance to a failover group")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it. You will need to remove the resource group twice. Removing the resource group the first time will remove the managed instance and virtual clusters but will then fail with the error message `az group delete : Long running operation failed with status 'Conflict'.`. Run the az group delete command a second time to remove any residual resources as well as the resource group.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az network vnet](/cli/azure/network/vnet) | Virtual network commands. |
-| [az network vnet subnet](/cli/azure/network/vnet/subnet) | Virtual network subnet commands. |
-| [az network nsg](/cli/azure/network/nsg) | Network security group commands. |
-| [az network route-table](/cli/azure/network/route-table) | Route table commands. |
-| [az sql mi](/cli/azure/sql/mi) | SQL Managed Instance commands. |
-| [az network public-ip](/cli/azure/network/public-ip) | Network public IP address commands. |
-| [az network vnet-gateway](/cli/azure/network/vnet-gateway) | Virtual Network Gateway commands. |
-| [az sql instance-failover-group](/cli/azure/sql/instance-failover-group) | SQL Managed Instance failover group commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Auditing And Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-auditing-and-threat-detection-cli.md
- Title: CLI example of auditing and Advanced Threat Protection - Azure SQL Database
-description: Azure CLI example script to configure auditing and Advanced Threat Protection in an Azure SQL Database
-------- Previously updated : 02/09/2021-
-# Use CLI to configure SQL Database auditing and Advanced Threat Protection
-
-This Azure CLI script example configures SQL Database auditing and Advanced Threat Protection.
-
-If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-```azurecli-interactive
-#!/bin/bash
-location="East US"
-randomIdentifier=random123
-
-resource="resource-$randomIdentifier"
-server="server-$randomIdentifier"
-database="database-$randomIdentifier"
-storage="storage$randomIdentifier"
-
-notification="changeto@your.email;changeto@your.email"
-
-login="sampleLogin"
-password="samplePassword123!"
-
-echo "Using resource group $resource with login: $login, password: $password..."
-
-echo "Creating $resource..."
-az group create --name $resource --location "$location"
-
-echo "Creating $server in $location..."
-az sql server create --name $server --resource-group $resource --location "$location" --admin-user $login --admin-password $password
-
-echo "Creating $database on $server..."
-az sql db create --name $database --resource-group $resource --server $server --service-objective S0
-
-echo "Creating $storage..."
-az storage account create --name $storage --resource-group $resource --location "$location" --sku Standard_LRS
-
-echo "Setting access policy on $storage..."
-az sql db audit-policy update --resource-group $resource --server $server --name $database --state Enabled --blob-storage-target-state Enabled --storage-account $storage
-
-echo "Setting threat detection policy on $storage..."
-az sql db threat-policy update --email-account-admins Disabled --email-addresses $notification --name $database --resource-group $resource --server $server --state Enabled --storage-account $storage
-```
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql db audit-policy](/cli/azure/sql/db/audit-policy) | Sets the auditing policy for a database. |
-| [az sql db threat-policy](/cli/azure/sql/db/threat-policy) | Sets an Advanced Threat Protection policy on a database. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-backup-database-cli.md
- Title: "Azure CLI: Backup a database in Azure SQL Database"
-description: Azure CLI example script to backup an Azure SQL single database to an Azure storage container
-------- Previously updated : 03/27/2019-
-# Use CLI to backup an Azure SQL single database to an Azure storage container
-
-This Azure CLI example backs up a database in SQL Database to an Azure storage container.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/backup-database/backup-database.sh "Restore SQL Database")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az sql server](/cli/azure/sql/server) | Server commands. |
-| [az sql db](/cli/azure/sql/db) | Database commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-copy-database-to-new-server-cli.md
- Title: "Azure CLI: Copy database in Azure SQL Database to new server"
-description: Azure CLI example script to copy a database in Azure SQL Database to a new server
-------- Previously updated : 03/12/2019-
-# Use CLI to copy a database in Azure SQL Database to a new server
-
-This Azure CLI script example creates a copy of an existing database in a new server.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/copy-database-to-new-server/copy-database-to-new-server.sh "Copy database to new server")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-az group delete --name $targetResource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql db copy](/cli/azure/sql/db#az_sql_db_copy) | Creates a copy of a database that uses the snapshot at the current time. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-create-configure-managed-instance-cli.md
- Title: "Azure CLI: Create a managed instance"
-description: Azure CLI example script to create a managed instance in Azure SQL Managed Instance
-------- Previously updated : 03/25/2019-
-# Use CLI to create an Azure SQL Managed Instance
-
-This Azure CLI script example creates an Azure SQL Managed Instance in a dedicated subnet within a new virtual network. It also configures a route table and a network security group for the virtual network. Once the script has been successfully run, the managed instance can be accessed from within the virtual network or from an on-premises environment. See [Configure Azure VM to connect to an Azure SQL Managed Instance]../../azure-sql/managed-instance/connect-vm-instance-configure.md) and [Configure a point-to-site connection to an Azure SQL Managed Instance from on-premises](../../azure-sql/managed-instance/point-to-site-p2s-configure.md).
-
-> [!IMPORTANT]
-> For limitations, see [supported regions](../../azure-sql/managed-instance/resource-limits.md#supported-regions) and [supported subscription types](../../azure-sql/managed-instance/resource-limits.md#supported-subscription-types).
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/managed-instance/create-managed-instance.sh "Create managed instance")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az network vnet](/cli/azure/network/vnet) | Virtual network commands. |
-| [az network vnet subnet](/cli/azure/network/vnet/subnet) | Virtual network subnet commands. |
-| [az network route-table](/cli/azure/network/route-table) | Network route table commands. |
-| [az sql mi](/cli/azure/sql/mi) | SQL Managed Instance commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Create Managed Instance To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-create-managed-instance-to-failover-group-cli.md
- Title: "Azure CLI: Add managed instance to failover group"
-description: Learn how to create two managed instances, add them to a failover group, and then test the failover.
-------- Previously updated : 07/16/2019-
-# Use CLI to create an Azure SQL Managed Instance to a failover group
-
-This Azure CLI example creates two managed instances, adds them to a failover group, and then tests failover from the primary managed instance to the secondary managed instance.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample scripts
-
-### Sign in to Azure
--
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/failover-groups/add-managed-instance-to-failover-group-az-cli.sh "Add managed instance to a failover group")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it. You will need to remove the resource group twice. Removing the resource group the first time will remove the managed instance and virtual clusters but will then fail with the error message `az group delete : Long running operation failed with status 'Conflict'.`. Run the az group delete command a second time to remove any residual resources as well as the resource group.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az network vnet](/cli/azure/network/vnet) | Virtual network commands. |
-| [az network vnet subnet](/cli/azure/network/vnet/subnet) | Virtual network subnet commands. |
-| [az network nsg](/cli/azure/network/nsg) | Network security group commands. |
-| [az network nsg rule](/cli/azure/network/nsg/rule)| Network security rule commands. |
-| [az network route-table](/cli/azure/network/route-table) | Route table commands. |
-| [az sql mi](/cli/azure/sql/mi) | SQL Managed Instance commands. |
-| [az network public-ip](/cli/azure/network/public-ip) | Network public IP address commands. |
-| [az network vnet-gateway](/cli/azure/network/vnet-gateway) | Virtual Network Gateway commands |
-| [az sql instance-failover-group](/cli/azure/sql/instance-failover-group) | SQL Managed Instance failover group commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-import-from-bacpac-cli.md
- Title: "Azure CLI: Import BACPAC file to database in Azure SQL Database"
-description: Azure CLI example script to import a BACPAC file into a database in Azure SQL Database
-------- Previously updated : 05/24/2019-
-# Use CLI to import a BACPAC file into a database in SQL Database
-
-This Azure CLI script example imports a database from a *.bacpac* file into a database in SQL Database.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/import-from-bacpac/import-from-bacpac.sh "Create SQL Database")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql server](/cli/azure/sql/server) | Server commands. |
-| [az sql db import](/cli/azure/sql/db#az_sql_db_import) | Database import command. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-restore-database-cli.md
- Title: "Azure CLI: Restore a backup"
-description: Azure CLI example script to restore a database in Azure SQL Database to an earlier point in time from automatic backups.
-------- Previously updated : 03/27/2019-
-# Use CLI to restore a single database in Azure SQL Database to an earlier point in time
-
-This Azure CLI example restores a single database in Azure SQL Database to a specific point in time.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/restore-database/restore-database.sh "Restore SQL Database")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql db restore](/cli/azure/sql/db#az_sql_db_restore) | Restore database command. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Setup Geodr And Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-database-cli.md
- Title: CLI example-active geo-replication-single Azure SQL Database
-description: Azure CLI example script to set up active geo-replication for a single database in Azure SQL Database and fail it over.
-------- Previously updated : 03/12/2019-
-# Use CLI to configure active geo-replication for a single database in Azure SQL Database
-
-This Azure CLI script example configures active geo-replication for a single database and fails it over to a secondary replica of the database.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/setup-geodr-and-failover/setup-geodr-and-failover-single-database.sh "Set up active geo-replication for single database")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql db replica](/cli/azure/sql/db/replica) | Database replica commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Database Setup Geodr And Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-cli.md
- Title: "Az CLI: Configure active geo-replication for an elastic pool"
-description: Azure CLI example script to set up active geo-replication for a pooled database in Azure SQL Database and fail it over.
-------- Previously updated : 03/12/2019-
-# Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
-
-This Azure CLI script example configures active geo-replication for a pooled database in Azure SQL Database and fails it over to the secondary replica of the database.
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/setup-geodr-and-failover/setup-geodr-and-failover-elastic-pool.sh "Set up active geo-replication for elastic pool")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-az group delete --name $secondaryResource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql elastic-pool](/cli/azure/sql/elastic-pool) | Elastic pool commands |
-| [az sql db replica](/cli/azure/sql/db/replica) | Database replication commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Sql Managed Instance Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-managed-instance-restore-geo-backup-cli.md
- Title: CLI example Restore Geo-backup - Azure SQL Database
-description: Azure CLI example script to restore an Azure SQL Managed Instance Database from a geo-redundant backup.
-------- Previously updated : 07/03/2019-
-# Use CLI to restore a Managed Instance database to another geo-region
-
-This Azure CLI script example restores an Azure SQL Managed Instance database from a remote geo-region (geo-restore).
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Prerequisites
-
-An existing pair of managed instances, see [Use Azure CLI to create an Azure SQL Managed Instance](sql-database-create-configure-managed-instance-cli.md).
-
-### Sign in to Azure
--
-### Run the script
-
-```azurepowershell-interactive
-#!/bin/bash
-$instance = "<instanceId>" # add instance here
-$targetInstance = "<targetInstanceId>" # add target instance here
-$resource = "<resourceId>" # add resource here
-
-$randomIdentifier = $(Get-Random)
-$managedDatabase = "managedDatabase-$randomIdentifier"
-
-echo "Creating $($managedDatabase) on $($instance)..."
-az sql midb create -g $resource --mi $instance -n $managedDatabase
-
-echo "Restoring $($managedDatabase) to $($targetInstance)..."
-az sql midb restore -g $resource --mi $instance -n $managedDatabase --dest-name $targetInstance --time "2018-05-20T05:34:22"
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Script | Description |
-|||
-| [az sql midb](/cli/azure/sql/midb) | Managed Instance Database commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
sql-database Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
- Title: CLI example- Enable BYOK TDE - Azure SQL Managed Instance
-description: "Learn how to configure an Azure SQL Managed Instance to start using BYOK Transparent Data Encryption (TDE) for encryption-at-rest using PowerShell."
-------- Previously updated : 11/05/2019-
-# Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
-
-This Azure CLI script example configures Transparent Data Encryption (TDE) with customer-managed key for Azure SQL Managed Instance, using a key from Azure Key Vault. This is often referred to as a Bring Your Own Key scenario for TDE. To learn more about the TDE with customer-managed key, see [TDE Bring Your Own Key to Azure SQL](../../azure-sql/database/transparent-data-encryption-byok-overview.md).
-
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-## Sample script
-
-### Prerequisites
-
-An existing Managed Instance, see [Use Azure CLI to create an Azure SQL Managed Instance](sql-database-create-configure-managed-instance-cli.md).
-
-### Sign in to Azure
--
-```azurecli-interactive
-$subscription = "<subscriptionId>" # add subscription here
-
-az account set -s $subscription # ...or use 'az login'
-```
-
-### Run the script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/transparent-data-encryption/setup-tde-byok-sqlmi.sh "Set up BYOK TDE for SQL Managed Instance")]
-
-### Clean up deployment
-
-Use the following command to remove the resource group and all resources associated with it.
-
-```azurecli-interactive
-az group delete --name $resource
-```
-
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Description |
-|||
-| [az sql db](/cli/azure/sql/db) | Database commands. |
-| [az sql failover-group](/cli/azure/sql/failover-group) | Failover group commands. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional SQL Database CLI script samples can be found in the [Azure SQL Database documentation](../../azure-sql/database/az-cli-script-samples-content-guide.md).
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/build-configuration.md
jobs:
uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
- repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
+ repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)
action: "upload" ###### Repository/Build Configurations ###### app_location: "src" # App source code path relative to repository root
static-web-apps Publish Jekyll https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-jekyll.md
To configure environment variables, such as `JEKYLL_ENV`, add an `env` section t
uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
- repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
+ repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)
action: "upload" ###### Repository/Build Configurations - These values can be configured to match you app requirements. ###### # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-powershell.md
+
+ Title: Manage block blobs with PowerShell
+
+description: Manage blobs with PowerShell
++++ Last updated : 01/03/2022++
+# Manage block blobs with PowerShell
+
+Blob storage supports block blobs, append blobs, and page blobs. Block blobs are optimized for uploading large amounts of data efficiently. Block blobs are ideal for storing images, documents, and other types of data that aren't subject to random read and write operations. This article explains how to work with block blobs.
+
+## Prerequisites
+
+- An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
+
+- Azure PowerShell module Az, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+
+### Configure a context object to encapsulate credentials
+
+Every request to Azure Storage must be authorized. You can authorize a request made from PowerShell with your Azure AD account or by using the account access keys. The examples in this article use Azure AD authorization in conjunction with context objects. Context objects encapsulate your Azure AD credentials and pass them during subsequent data operations.
+
+To sign in to your Azure account with an Azure AD account, open PowerShell and call the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+```azurepowershell
+#Connect to your Azure subscription
+Connect-AzAccount
+```
+
+After the connection has been established, create the Azure context. Authenticating with Azure AD automatically creates an Azure context for your default subscription. In some cases, you may need to access resources in a different subscription after authenticating. To accomplish this, you can change the subscription associated with your current Azure session by modifying the active session context.
+
+To use your default subscription, create the context by calling the `New-AzStorageContext` cmdlet. Include the `-UseConnectedAccount` parameter so that data operations will be performed using your Azure AD credentials.
+
+```azurepowershell
+#Create a context object using Azure AD credentials
+$ctx = New-AzStorageContext -StorageAccountName <storage account name> -UseConnectedAccount
+```
+
+To change subscriptions, retrieve the context object with the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet, then change the current context with the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. For more information, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps#change-the-active-subscription).
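+
+For example, a minimal sketch of switching the active subscription might look like the following. The subscription name is a placeholder; replace it with your own.
+
+```azurepowershell
+#List the subscriptions available to the signed-in account
+Get-AzSubscription
+
+#Switch the active session context to another subscription (placeholder name)
+Set-AzContext -Subscription "My Other Subscription"
+```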
+
+### Create a container
+
+All blob data is stored within containers, so you'll need at least one container resource before you can upload data. If needed, use the following example to create a storage container. For more information, see [Managing blob containers using PowerShell](blob-containers-powershell.md).
+
+```azurepowershell
+#Create a container object
+$container = New-AzStorageContainer -Name "demo-container" -Context $ctx
+```
+
+When you use the following examples, you'll need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+## Upload a blob
+
+To upload a file to a block blob, pass the required parameter values to the `Set-AzStorageBlobContent` cmdlet. Supply the path and file name with the `-File` parameter, and the name of the container with the `-Container` parameter. You'll also need to provide a reference to the context object with the `-Context` parameter.
+
+This command creates the blob if it doesn't exist, or prompts for overwrite confirmation if it exists. You can overwrite the file without confirmation if you pass the `-Force` parameter to the cmdlet.
+
+The following example specifies a `-File` parameter value to upload a single, named file. It also demonstrates the use of the PowerShell pipeline operator and `Get-ChildItem` cmdlet to upload multiple files. The `Get-ChildItem` cmdlet uses the `-Path` parameter to specify *C:\Temp\\\*.png*. The inclusion of the asterisk (`*`) wildcard specifies all files with the *.png* filename extension. The `-Recurse` parameter searches the *Temp* directory and its subdirectories.
+
+```azurepowershell
+#Set variables
+$path = "C:\temp\"
+$containerName = "demo-container"
+$filename = "demo-file.txt"
+$imageFiles = $path + "*.png"
+$file = $path + $filename
+
+#Upload a single named file
+Set-AzStorageBlobContent -File $file -Container $containerName -Context $ctx
+
+#Upload multiple image files recursively
+Get-ChildItem -Path $imageFiles -Recurse | Set-AzStorageBlobContent -Container $containerName -Context $ctx
+```
+
+The result displays the storage account name, the storage container name, and provides a list of the files uploaded.
+
+```Result
+ AccountName: demostorageaccount, ContainerName: demo-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+demo-file.txt BlockBlob 222 application/octet-stream 2021-12-14 01:38:03Z Cool False
+hello-world.png BlockBlob 14709 application/octet-stream 2021-12-14 01:38:03Z Cool False
+hello-world2.png BlockBlob 12472 application/octet-stream 2021-12-14 01:38:03Z Cool False
+hello-world3.png BlockBlob 13537 application/octet-stream 2021-12-14 01:38:03Z Cool False
+```
+
+## List blobs
+
+The `Get-AzStorageBlob` cmdlet is used to list blobs stored within a container. You can use various approaches to define the scope of your search. Use the `-Container` and `-Name` parameter to list a specific blob within a known container. To generate an unfiltered list of all blobs within a specific container, use the `-Container` parameter alone, without a `-Name` value.
+
+There's no restriction on the number of containers or blobs a storage account may have. To potentially avoid retrieving thousands of blobs, it's a good idea to limit the amount of data returned. When retrieving multiple blobs, you can use the `-Prefix` parameter to specify blobs whose names begin with a specific string. You may also use the `-Name` parameter in conjunction with a wildcard to specify file names or types.
+
+The `-MaxCount` parameter can be used to limit the number of unfiltered blobs returned from a container. A service limit of 5,000 is imposed on all Azure resources. This limit ensures that manageable amounts of data are retrieved and performance is not impacted. If the number of blobs returned exceeds either the `-MaxCount` value or the service limit, a continuation token is returned. This token allows you to use multiple requests to retrieve any number of blobs. More information is available on [Enumerating blob resources](/rest/api/storageservices/enumerating-blob-resources).
+
+The following example shows several approaches used to provide a list of blobs. The first approach lists a single blob within a specific container resource. The second approach uses a wildcard to list all `.jpg` files whose names include *louis*. The search is restricted to five containers using the `-MaxCount` parameter. The third approach uses the `-MaxCount` and `-ContinuationToken` parameters to retrieve all blobs within a container in batches.
+
+```azurepowershell
+#Set variables
+$namedContainer = "named-container"
+$demoContainer = "demo-container"
+$containerPrefix = "demo"
+
+$maxCount = 1000
+$total = 0
+$token = $Null
+
+#Approach 1: List all blobs in a named container
+Get-AzStorageBlob -Container $namedContainer -Context $ctx
+
+#Approach 2: Use a wildcard to list blobs in all containers
+Get-AzStorageContainer -MaxCount 5 -Context $ctx | Get-AzStorageBlob -Blob "*louis*.jpg"
+
+#Approach 3: List batches of blobs using MaxCount and ContinuationToken parameters
+Do
+{
+ #Retrieve blobs using the MaxCount parameter
+ $blobs = Get-AzStorageBlob -Container $demoContainer -MaxCount $maxCount -ContinuationToken $token -Context $ctx
+ $blobCount = 1
+
+ #Loop through the batch
+ Foreach ($blob in $blobs)
+ {
+ #To-do: Perform some work on individual blobs here
+
+ #Display progress bar
+ $percent = $($blobCount/$maxCount*100)
+ Write-Progress -Activity "Processing blobs" -Status "$percent% Complete" -PercentComplete $percent
+ $blobCount++
+ }
+
+ #Update $total
+ $total += $blobs.Count
+
+ #Exit if all blobs processed
+ If($blobs.Length -le 0) { Break; }
+
+ #Set continuation token to retrieve the next batch
+ $token = $blobs[$blobs.Count -1].ContinuationToken
+ }
+ While ($null -ne $token)
+ Write-Host "`n`n AccountName: $($ctx.StorageAccountName), ContainerName: $demoContainer `n"
+ Write-Host "Processed $total blobs in $namedContainer."
+```
+
+The first two approaches display the storage account and container names, and a list of the blobs retrieved. The third approach displays the total count of blobs within a named container. The blobs are retrieved in batches, and a status bar shows the progress during the count.
+
+```Result
+ AccountName: demostorageaccount, ContainerName: named-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+index.txt BlockBlob 222 text/plain 2021-12-15 22:00:10Z Cool False
+miles-davis.txt BlockBlob 23454 text/plain 2021-12-15 22:17:59Z Cool False
+cab-calloway.txt BlockBlob 18419 text/plain 2021-12-15 22:17:59Z Cool False
+benny-goodman.txt BlockBlob 17726 text/plain 2021-12-15 22:17:59Z Cool False
+
+ AccountName: demostorageaccount, ContainerName: demo-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+louis-armstrong.jpg BlockBlob 211482 image/jpeg 2021-12-14 01:38:03Z Cool False
+louis-jordan.jpg BlockBlob 55766 image/jpeg 2021-12-14 01:38:03Z Cool False
+louis-prima.jpg BlockBlob 290651 image/jpeg 2021-12-14 01:38:03Z Cool False
+
+ AccountName: demostorageaccount, ContainerName: demo-container
+
+Processed 5257 blobs in demo-container.
+```
+
+## Download a blob
+
+Depending on your use case, the `Get-AzStorageBlobContent` cmdlet can be used to download either single or multiple blobs. As with most operations, both approaches require a context object.
+
+To download a single named blob, you can call the cmdlet directly and supply values for the `-Blob` and `-Container` parameters. The blob will be downloaded to the working PowerShell directory by default, but an alternate location can be specified. To change the target location, a valid, existing path must be passed with the `-Destination` parameter. Because the operation can't create a destination, it will fail with an error if your specified path doesn't exist.
+
+Multiple blobs can be downloaded by combining the `Get-AzStorageBlob` cmdlet and the PowerShell pipeline operator. First, create a list of blobs with the `Get-AzStorageBlob` cmdlet. Next, use the pipeline operator and the `Get-AzStorageBlobContent` cmdlet to retrieve the blobs from the container.
+
+The following sample code provides an example of both single and multiple download approaches. It also offers a simplified approach to searching all containers for specific files using a wildcard. Because some environments may have hundreds of thousands of resources, using the `-MaxCount` parameter is recommended.
+
+```azurepowershell
+#Set variables
+$containerName = "demo-container"
+$path = "C:\temp\downloads\"
+$blobName = "demo-file.txt"
+$fileList = "*.png"
+$pipelineList = "louis*"
+$maxCount = 10
+
+#Download a single named blob
+Get-AzStorageBlobContent -Container $containerName -Blob $blobName -Destination $path -Context $ctx
+
+#Download multiple blobs using the pipeline
+Get-AzStorageBlob -Container $containerName -Blob $fileList -Context $ctx | Get-AzStorageBlobContent
+
+#Use wildcard to download blobs from all containers
+Get-AzStorageContainer -MaxCount $maxCount -Context $ctx | Get-AzStorageBlob -Blob $pipelineList | Get-AzStorageBlobContent
+```
+
+The result displays the storage account and container names and provides a list of the files downloaded.
+
+```Result
+ AccountName: demostorageaccount, ContainerName: demo-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+demo-file.txt BlockBlob 222 application/octet-stream 2021-12-14 01:38:03Z Unknown False
+hello-world.png BlockBlob 14709 application/octet-stream 2021-12-14 01:38:03Z Unknown False
+hello-world2.png BlockBlob 12472 application/octet-stream 2021-12-14 01:38:03Z Unknown False
+hello-world3.png BlockBlob 13537 application/octet-stream 2021-12-14 01:38:03Z Unknown False
+
+ AccountName: demostorageaccount, ContainerName: public-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+louis-armstrong.jpg BlockBlob 211482 image/jpeg 2021-12-14 18:56:03Z Unknown False
+
+ AccountName: demostorageaccount, ContainerName: read-only-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+louis-jordan.jpg BlockBlob 55766 image/jpeg 2021-12-14 18:56:21Z Unknown False
+
+ AccountName: demostorageaccount, ContainerName: hidden-container
+
+Name BlobType Length ContentType LastModified AccessTier IsDeleted
+- -- -- -
+louis-prima.jpg BlockBlob 290651 image/jpeg 2021-12-14 18:56:45Z Unknown False
+```
+
+## Manage blob properties and metadata
+
+A container exposes both system properties and user-defined metadata. System properties exist on each Blob Storage resource. Some properties are read-only, while others can be read or set. Under the covers, some system properties map to certain standard HTTP headers.
+
+User-defined metadata consists of one or more name-value pairs that you specify for a Blob Storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.
+
+### Reading blob properties
+
+To read blob properties or metadata, you must first retrieve the blob from the service. Use the `Get-AzStorageBlob` cmdlet to retrieve a blob's properties and metadata, but not its content. Next, use the `BlobClient.GetProperties` method to fetch the blob's properties. The properties or metadata can then be read or set as needed.
+
+The following example retrieves a blob and lists its properties.
+
+```azurepowershell
+$blob = Get-AzStorageBlob -Blob "blue-moon.mp3" -Container "demo-container" -Context $ctx
+$properties = $blob.BlobClient.GetProperties()
+Echo $properties.Value
+```
+
+The result displays a list of the blob's properties as shown below.
+
+```Result
+LastModified : 11/16/2021 3:42:07 PM +00:00
+CreatedOn : 11/16/2021 3:42:07 PM +00:00
+Metadata : {}
+BlobType : Block
+LeaseDuration : Infinite
+LeaseState : Available
+LeaseStatus : Unlocked
+ContentLength : 2163298
+ContentType : audio/mpeg
+ETag : 0x8D9C0AA9E0CBA78
+IsServerEncrypted : True
+AccessTier : Cool
+IsLatestVersion : False
+TagCount : 0
+ExpiresOn : 1/1/0001 12:00:00 AM +00:00
+LastAccessed : 1/1/0001 12:00:00 AM +00:00
+HasLegalHold : False
+```
+
+### Read and write blob metadata
+
+Blob metadata is an optional set of name/value pairs associated with a blob. As shown in the previous example, there's no metadata associated with a blob initially, though it can be added when necessary. To update blob metadata, you'll use the `BlobClient.SetMetadata` method. This method only accepts key-value pairs stored in a generic `IDictionary` object. For more information, see the [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) class definition.
+
+The example below first updates a blob's metadata and then retrieves it to verify the change. The sample blob is flushed from memory to ensure the metadata isn't being read from the in-memory object.
+
+```azurepowershell
+#Set variable
+$container = "demo-container"
+$blobName = "blue-moon.mp3"
+
+#Retrieve blob
+$blob = Get-AzStorageBlob -Blob $blobName -Container $container -Context $ctx
+
+#Create IDictionary, add key-value metadata pairs to IDictionary
+$metadata = New-Object System.Collections.Generic.Dictionary"[String,String]"
+$metadata.Add("YearWritten","1934")
+$metadata.Add("YearRecorded","1958")
+$metadata.Add("Composer","Richard Rogers")
+$metadata.Add("Lyricist","Lorenz Hart")
+$metadata.Add("Artist","Tony Bennett")
+
+#Update metadata
+$blob.BlobClient.SetMetadata($metadata, $null)
+
+#Flush blob from memory, retrieve updated blob, retrieve properties
+$blob = $null
+$blob = Get-AzStorageBlob -Blob $blobName -Container $container -Context $ctx
+$properties = $blob.BlobClient.GetProperties()
+
+#Display metadata
+Echo $properties.Value.Metadata
+```
+
+The result returns the blob's newly updated metadata as shown below.
+
+```Result
+Key Value
+ --
+YearWritten 1934
+YearRecorded 1958
+Composer Richard Rogers
+Lyricist Lorenz Hart
+Artist Tony Bennett
+```
+
+## Copy operations for blobs
+
+There are many scenarios in which blobs of different types may be copied. Examples in this article are limited to block blobs.
+
+### Copy a source blob to a destination blob
+
+For a simplified copy operation within the same storage account, use the `Copy-AzStorageBlob` cmdlet. Because the operation is copying a blob within the same storage account, it's a synchronous operation. Cross-account operations are asynchronous.
+
+You should consider the use of AzCopy for ease and performance, especially when copying blobs between storage accounts. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Find out more about how to [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10).
+
+The example below copies the **bannerphoto.png** blob from the **photos** container to the **photos** folder within the **archive** container. Both containers exist within the same storage account. The result verifies the success of the copy operation.
+
+```azurepowershell
+$blobname = "bannerphoto.png"
+Copy-AzStorageBlob -SrcContainer "photos" -SrcBlob $blobname -DestContainer "archive" -DestBlob $("photos/$blobname") -Context $ctx
+
+AccountName: demostorageaccount, ContainerName: archive
+
+Name BlobType Length ContentType LastModified AccessTier SnapshotTime IsDeleted VersionId
+- -- -- -
+photos/bannerphoto BlockBlob 12472 image/png 2021-11-27 23:11:43Z Cool False
+```
+
+You can use the `-Force` parameter to overwrite an existing blob with the same name at the destination. This operation effectively replaces the destination blob. It also removes any uncommitted blocks and overwrites the destination blob's metadata.
+
+### Copy a snapshot to a destination blob with a different name
+
+When you copy a snapshot to a destination blob with a different name, the resulting destination blob is a writeable blob and not a snapshot.
+
+The source blob for a copy operation may be a block blob, an append blob, a page blob, or a snapshot. If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob will be overwritten.
+
+The destination blob can't be modified while a copy operation is in progress. A destination blob can only have one outstanding copy operation. In other words, a blob can't be the destination for multiple pending copy operations.
+
+When you copy a blob within the same storage account, it's a synchronous operation. When you copy across accounts it's an asynchronous operation.
+
+The entire source blob or file is always copied. Copying a range of bytes or set of blocks is not supported.
+
+When a blob is copied, its system properties are copied to the destination blob with the same values.
+
+You can check the status of a pending asynchronous copy operation and, if necessary, abort it before it completes, as shown in the sketch below.
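+
+The following is a minimal sketch of an asynchronous cross-account copy; the destination account name and its `$destCtx` context are hypothetical, and the source values reuse the earlier example. `Start-AzStorageBlobCopy` begins the copy, `Get-AzStorageBlobCopyState` reports its progress, and `Stop-AzStorageBlobCopy` aborts it if it's still pending.
+
+```azurepowershell
+#Create a context for the (hypothetical) destination account
+$destCtx = New-AzStorageContext -StorageAccountName "destdemoaccount" -UseConnectedAccount
+
+#Start an asynchronous copy to the destination account
+Start-AzStorageBlobCopy -SrcContainer "photos" -SrcBlob "bannerphoto.png" -Context $ctx -DestContainer "archive" -DestBlob "photos/bannerphoto.png" -DestContext $destCtx
+
+#Check the copy status
+$state = Get-AzStorageBlobCopyState -Container "archive" -Blob "photos/bannerphoto.png" -Context $destCtx
+
+#Abort the copy if it hasn't completed yet
+if ($state.Status -eq "Pending") {
+    Stop-AzStorageBlobCopy -Container "archive" -Blob "photos/bannerphoto.png" -Context $destCtx -Force
+}
+```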
+
+## Snapshot blobs
+
+A snapshot is a read-only version of a blob that's taken at a point in time. A snapshot is identical to its base blob, except that a DateTime value is appended to the blob URI to indicate when the snapshot was taken. This appended DateTime value is the only distinction between the base blob and the snapshot.
+
+Any leases associated with the base blob do not affect the snapshot. You cannot acquire a lease on a snapshot. Read more about [Blob snapshots](snapshots-overview.md).
+
+The following sample code retrieves a blob from a storage container and creates a snapshot of it.
+
+```azurepowershell
+$blob = Get-AzStorageBlob -Container "manuscripts" -Blob "novels/fast-cars.docx" -Context $ctx
+$blob.BlobClient.CreateSnapshot()
+```
+
+## Set blob tier
+
+When you change a blob's tier, you move the blob and all of its data to the target tier. To do this, retrieve a blob with the `Get-AzStorageBlob` cmdlet, and call the `BlobClient.SetAccessTier` method. This can be used to change the tier between **Hot**, **Cool**, and **Archive**.
+
+Changing tiers from **Cool** or **Hot** to **Archive** takes place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline and can't be read or modified. Before you can read or modify an archived blob's data, you'll need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+
+The following sample code sets the tier to **Hot** for all blobs within the `archive` container.
+
+```azurepowershell
+$blobs = Get-AzStorageBlob -Container archive -Context $ctx
+Foreach($blob in $blobs) {
+ $blob.BlobClient.SetAccessTier("Hot")
+}
+```
+
+## Operations using blob tags
+
+Blob index tags make data management and discovery easier. Blob index tags are user-defined key-value index attributes that you can apply to your blobs. Once configured, you can categorize and find objects within an individual container or across all containers. Blob resources can be dynamically categorized by updating their index tags without requiring a change in container organization. This offers a flexible way to cope with changing data requirements. You can use both metadata and index tags simultaneously. For more information on index tags, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
+
+The following example illustrates how to add blob index tags to a series of blobs. The example reads data from an XML file and uses it to create index tags on several blobs. To use the sample code, create a local *blob-list.xml* file in your *C:\temp* directory. The XML data is provided below.
+
+```xml
+<Venue Name="House of Prime Rib" Type="Restaurant">
+ <Files>
+ <File path="transactions/12027121.csv" />
+ <File path="campaigns/radio-campaign.docx" />
+ <File path="photos/bannerphoto.png" />
+ <File path="archive/completed/2020review.pdf" />
+ <File path="logs/2020/01/01/logfile.txt" />
+ </Files>
+</Venue>
+```
+
+The sample code creates a hash table and assigns the **$tags** variable to it. Next, it uses the `Get-Content` cmdlet with an `[xml]` type cast to create an object based on the XML structure. It then adds key-value pairs to the hash table to be used as the tag values. Finally, it iterates through the XML object and creates tags for each `File` node.
+
+```azurepowershell
+#Set variables
+$filePath = "C:\temp\blob-list.xml"
+$tags = @{}
+
+#Get data, set tag key-values
+[xml]$data = Get-Content -Path $filepath
+$tags.Add("VenueName", $data.Venue.Name)
+$tags.Add("VenueType", $data.Venue.Type)
+
+#Loop through files and add tag
+$data.Venue.Files.ChildNodes | ForEach-Object {
+    #Split the path into container name and blob name
+    $path = $_.Path -split "/",2
+
+    #Apply the blob tags
+    Set-AzStorageBlobTag -Container $path[0] -Blob $path[1] -Tag $tags -Context $ctx
+}
+```
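+
+Once tags are applied, you can use them to find blobs across containers. The following is a minimal sketch, assuming the `Get-AzStorageBlobByTag` cmdlet available in recent Az.Storage versions and the tag values set above; in the filter expression, tag names are wrapped in double quotes and tag values in single quotes.
+
+```azurepowershell
+#Find all blobs tagged with the venue name
+$tagFilter = """VenueName""='House of Prime Rib'"
+Get-AzStorageBlobByTag -TagFilterSqlExpression $tagFilter -Context $ctx
+```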
+
+## Delete blobs
+
+You can delete either a single blob or series of blobs with the `Remove-AzStorageBlob` cmdlet. When deleting multiple blobs, you can leverage conditional operations, loops, or the PowerShell pipeline as shown in the examples below.
+
+```azurepowershell
+#Create variables
+$containerName = "demo-container"
+$blobName = "demo-file.txt"
+$prefixName = "file"
+
+#Delete a single, named blob
+Remove-AzStorageBlob -Blob $blobName -Container $containerName -Context $ctx
+
+#Iterate a loop, deleting blobs
+for ($i = 1; $i -le 3; $i++) {
+ Remove-AzStorageBlob -Blob (-join($prefixName, $i, ".txt")) -Container $containerName -Context $ctx
+}
+
+#Retrieve blob list, delete using a pipeline
+Get-AzStorageBlob -Prefix $prefixName -Container $containerName -Context $ctx | Remove-AzStorageBlob
+```
+
+In some cases, it's possible to retrieve blobs that have been deleted. If your storage account's soft delete data protection option is enabled, the `-IncludeDeleted` parameter will return blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
+
+Use the following example to retrieve a list of blobs deleted within the container's associated retention period. The result displays a list of recently deleted blobs.
+
+```azurepowershell
+#Retrieve a list of blobs including those recently deleted
+Get-AzStorageBlob -Prefix $prefixName -IncludeDeleted -Context $ctx
+
+AccountName: demostorageaccount, ContainerName: demo-container
+
+Name      BlobType  Length ContentType              LastModified         AccessTier IsDeleted
+----      --------  ------ -----------              ------------         ---------- ---------
+file.txt  BlockBlob 22     application/octet-stream 2021-12-16 20:59:41Z Cool       True
+file2.txt BlockBlob 22     application/octet-stream 2021-12-17 00:14:24Z Cool       True
+file3.txt BlockBlob 22     application/octet-stream 2021-12-17 00:14:24Z Cool       True
+file4.txt BlockBlob 22     application/octet-stream 2021-12-17 00:14:25Z Cool       True
+```
+
+## Restore a soft-deleted blob
+
+As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore blobs deleted within the associated retention period.
+
+The following example explains how to restore a soft-deleted blob with the `BlobBaseClient.Undelete` method. Before you can follow this example, you'll need to enable soft delete and configure it on at least one of your storage accounts.
+
+To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
+
+```azurepowershell
+#Create variables
+$container = "demo-container"
+$prefix = "file"
+
+#Retrieve all blobs, filter deleted resources, restore deleted
+$blobs = Get-AzStorageBlob -Container $container -Prefix $prefix -Context $ctx -IncludeDeleted
+Foreach($blob in $blobs)
+{
+ if($blob.IsDeleted) { $blob.BlobBaseClient.Undelete() }
+}
+```
+
+## Next steps
+
+- [Run PowerShell commands with Azure AD credentials to access blob data](/azure/storage/blobs/authorize-data-operations-powershell)
+- [Create a storage account](/azure/storage/common/storage-account-create?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal)
+- [Manage blob containers using PowerShell](blob-containers-powershell.md)
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/monitor-blob-storage.md
If you choose to archive your logs to a storage account, you'll pay for the volu
> [!div class="mx-imgBorder"] > ![Diagnostic settings page archive storage](media/monitor-blob-storage/diagnostic-logs-settings-pane-archive-storage.png)
-2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then select the **Save** button.
+2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, and then select the **Save** button.
[!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support in Azure Blo
| ecdsa-sha2-nistp384| diffie-hellman-group16-sha512 | aes256-cbc | | ||| aes192-cbc ||
+SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support in accordance with the Microsoft Security Development Lifecycle (SDL). We strongly recommend that customers use SDL-approved algorithms to securely access their data. For more details, see the [SDL cryptographic recommendations](/security/sdl/cryptographic-recommendations).
+ ## Security - Host keys are published [here](secure-file-transfer-protocol-host-keys.md). During the public preview, host keys will rotate up to once per month.
This article describes limitations and known issues of SFTP support in Azure Blo
- The account needs to be in a [supported regions](secure-file-transfer-protocol-support.md#regional-availability).
- - Customer's subscription needs to be signed up for the preview. See this.
+ - Customer's subscription needs to be signed up for the preview. To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) *and* request to join via 'Preview features' in the Azure portal.
- To resolve the `Home Directory not accessible error.` error, check that:
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-keys-manage.md
Previously updated : 12/09/2021 Last updated : 01/04/2022
To rotate an account's access keys, the user must either be a Service Administra
## Create a key expiration policy
-Before you can create a key expiration policy, you may need to rotate each of your account access keys at least once.
+A key expiration policy enables you to set a reminder for the rotation of the account access keys. The reminder is displayed if the specified interval has elapsed and the keys have not yet been rotated. After you create a key expiration policy, you can monitor your storage accounts for compliance to ensure that the account access keys are rotated regularly.
+
+> [!NOTE]
+> Before you can create a key expiration policy, you may need to rotate each of your account access keys at least once.
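+
+As a minimal PowerShell sketch, assuming the Az.Storage module and hypothetical resource group and account names, the `-KeyExpirationPeriodInDay` parameter of `Set-AzStorageAccount` sets the reminder interval in days:
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName "storage-rg" -Name "demostorageaccount" -KeyExpirationPeriodInDay 60
+```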
### [Portal](#tab/azure-portal)
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-planning.md
Using sysprep on a server that has the Azure File Sync agent installed is not su
### Windows Search If cloud tiering is enabled on a server endpoint, files that are tiered are skipped and not indexed by Windows Search. Non-tiered files are indexed properly.
+> [!Note]
+> Windows clients will cause recalls when searching the file share if the **Always search file names and contents** setting is enabled on the client machine. This setting is disabled by default.
+ ### Other Hierarchical Storage Management (HSM) solutions No other HSM solutions should be used with Azure File Sync.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-scale-targets.md
Azure supports multiple types of storage accounts for different storage scenario
<sup>2 Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).</sup> ## Azure File Sync scale targets
-The following table indicates the boundaries of Microsoft's testing and also indicates which targets are hard limits:
+The following table indicates the boundaries of Microsoft's testing (soft limits) and also indicates which targets are hard limits:
| Resource | Target | Hard limit | |-|--||
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-troubleshooting-files-performance.md
To confirm, you can use Azure Metrics in the portal -
To learn more about configuring alerts in Azure Monitor, see [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md).
+## Slow performance when unzipping files in SMB file shares
+Depending on the exact compression method and unzip operation used, decompression may perform more slowly on an Azure file share than on your local disk. This is often because unzipping tools perform a large number of metadata operations while extracting a compressed archive. For the best performance, we recommend copying the compressed archive from the Azure file share to your local disk, unzipping it there, and then using a copy tool such as Robocopy (or AzCopy) to copy the extracted files back to the Azure file share. A copy tool like Robocopy can compensate for the decreased performance of metadata operations in Azure Files relative to your local disk by using multiple threads to copy data in parallel.
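+
+A minimal sketch of the copy-back step, assuming a hypothetical local folder and file share UNC path; `/MT:16` copies with 16 parallel threads and `/E` includes subdirectories:
+
+```powershell
+robocopy "C:\temp\extracted" "\\mystorageaccount.file.core.windows.net\myshare\extracted" /MT:16 /E
+```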
+ ## How to create alerts if a premium file share is trending toward being throttled 1. In the Azure portal, go to your storage account.
stream-analytics Stream Analytics Test Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-test-query.md
Previously updated : 3/6/2020 Last updated : 01/03/2022
Instead of using live data, you can use sample data from a local file to test yo
6. The sample data API is throttled after five requests in a 15-minute window. After the end of the 15-minute window, you can do more sample data requests. This limitation is applied at the subscription level. ## Troubleshooting-
-1. If you get this error ΓÇ£There was a network connectivity issue when fetching the results. Please check your network and firewall settings.ΓÇ¥, follow the steps below:
-
- * To check the connection to the service, open [https://queryruntime.azurestreamanalytics.com/api/home/index](https://queryruntime.azurestreamanalytics.com/api/home/index) in a browser. If you cannot open this link, then update your firewall settings.
-
-2. If you get this error "The request size is too big. Please reduce the input data size and try again.", follow the steps below:
+If you get this error "The request size is too big. Please reduce the input data size and try again.", follow the steps below:
* Reduce input size ΓÇô Test your query with smaller size sample file or with a smaller time range. * Reduce query size ΓÇô To test a selection of query, select a portion of query then click **Test selected query**.
synapse-analytics How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/how-to-access-secured-purview-account.md
Last updated 09/02/2021 -+ # Access a secured Azure Purview account from Azure Synapse Analytics
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
Last updated 09/29/2021 -+
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Last updated 09/07/2021 -+ # Data integration in Azure Synapse Analytics versus Azure Data Factory
synapse-analytics Data Integration Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/data-integration-data-lake.md
Last updated 04/15/2020 -+ # Ingest data into Azure Data Lake Storage Gen2
synapse-analytics Data Integration Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/data-integration-sql-pool.md
Last updated 11/03/2020 -+ # Ingest data into a dedicated SQL pool
synapse-analytics Linked Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/linked-service.md
Last updated 04/15/2020 -+ # Secure a linked service with Private Links
synapse-analytics Sql Pool Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/sql-pool-stored-procedure-activity.md
-+ Last updated 05/13/2021
synapse-analytics Get Started Add Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-add-admin.md
description: In this tutorial, you'll learn how to add another administrative us
--+
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-spark.md
description: In this tutorial, you'll learn to analyze data with Apache Spark.
--+
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
description: In this tutorial, you'll learn how to analyze data with a serverles
--+
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-sql-pool.md
description: In this tutorial, you'll use the NYC Taxi sample data to explore SQ
--+
synapse-analytics Get Started Analyze Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-storage.md
description: In this tutorial, you'll learn how to analyze data located in a sto
--+
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-create-workspace.md
description: In this tutorial, you'll learn how to create a Synapse workspace, a
--+
synapse-analytics Get Started Knowledge Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-knowledge-center.md
description: In this tutorial, you'll learn how to use the Synapse Knowledge cen
--+
synapse-analytics Get Started Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-monitor.md
description: In this tutorial, you'll learn how to monitor activities in your Sy
--+
synapse-analytics Get Started Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-pipelines.md
description: In this tutorial, you'll learn how to integrate pipelines and activ
--+
synapse-analytics Get Started Visualize Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-visualize-power-bi.md
description: In this tutorial, you'll learn how to use Power BI to visualize dat
--+
synapse-analytics Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started.md
description: In this tutorial, you'll learn the basic steps to set up and use Az
--+
synapse-analytics How To Analyze Complex Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/how-to-analyze-complex-schema.md
Last updated 06/15/2020 -+ # Analyze complex data types in Azure Synapse Analytics
synapse-analytics How To Move Workspace From One Region To Another https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md
Last updated 08/16/2021 -+ # Move an Azure Synapse Analytics workspace from one region to another
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
-+ Last updated 06/30/2021
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
-+ Last updated 12/16/2021
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-automl.md
-+ Last updated 09/03/2021
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
-+ Last updated 07/01/2021
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
-+ Last updated 11/20/2020
synapse-analytics Tutorial Configure Cognitive Services Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md
-+ Last updated 11/20/2020
synapse-analytics Tutorial Score Model Predict Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md
-+ Last updated 11/02/2021
synapse-analytics Tutorial Sql Pool Model Scoring Wizard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard.md
-+ Last updated 09/25/2020
synapse-analytics What Is Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/what-is-machine-learning.md
-+ Last updated 10/01/2021
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
ms.devlang:
-+ Last updated 03/10/2021 # Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-terminology.md
Last updated 11/02/2021 -+
synapse-analytics Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-what-is.md
Last updated 11/02/2021 -+
synapse-analytics Compatibility Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/compatibility-issues.md
Last updated 11/18/2020 -+ # Compatibility issues with third-party applications and Azure Synapse Analytics
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/system-integration.md
Last updated 11/24/2020 -+ # Azure Synapse Analytics system integration partners
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-apache-spark-notebook.md
description: This quickstart shows how to use the web tools to create a serverle
-+
synapse-analytics Quickstart Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-connect-azure-data-explorer.md
Last updated 10/07/2020 -+
synapse-analytics Quickstart Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-connect-synapse-link-cosmos-db.md
Last updated 04/21/2020 -+
synapse-analytics Quickstart Create Apache Spark Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-apache-spark-pool-portal.md
Last updated 08/19/2021 -+
synapse-analytics Quickstart Create Apache Spark Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-apache-spark-pool-studio.md
Last updated 10/16/2020 -+
synapse-analytics Quickstart Create Sql Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-sql-pool-portal.md
Last updated 04/15/2020 -+
synapse-analytics Quickstart Create Sql Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-sql-pool-studio.md
Last updated 10/16/2020 -+
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace-cli.md
Last updated 08/25/2020 -+
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace-powershell.md
Last updated 10/19/2020 -+
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace.md
Last updated 09/03/2020 -+
synapse-analytics Quickstart Load Studio Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-load-studio-sql-pool.md
Last updated 12/11/2020 -+
synapse-analytics Quickstart Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-power-bi.md
Last updated 10/27/2020 -+
synapse-analytics Quickstart Read From Gen2 To Pandas Dataframe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-read-from-gen2-to-pandas-dataframe.md
-+ Last updated 03/23/2021
synapse-analytics Quickstart Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-serverless-sql-pool.md
Last updated 04/15/2020 -+
synapse-analytics Connect To A Secure Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/connect-to-a-secure-storage-account.md
Last updated 02/10/2021 -+ # Connect to a secure Azure storage account from your Synapse workspace
synapse-analytics How To Connect To Workspace From Restricted Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network.md
Last updated 10/25/2020 -+ # Connect to workspace resources from a restricted network
synapse-analytics How To Connect To Workspace With Private Links https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md
Last updated 04/15/2020 -+ # Connect to your Azure Synapse workspace using private links
synapse-analytics How To Create A Workspace With Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-create-a-workspace-with-data-exfiltration-protection.md
Last updated 12/01/2020 -+ # Create a workspace with data exfiltration protection enabled
synapse-analytics How To Create Managed Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-create-managed-private-endpoints.md
Last updated 04/15/2020 -+ # Create a Managed private endpoint to your data source
synapse-analytics How To Grant Workspace Managed Identity Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions.md
Last updated 04/15/2020 -+
synapse-analytics How To Manage Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md
Last updated 12/1/2020 -+ # How to manage Synapse RBAC role assignments in Synapse Studio
synapse-analytics How To Review Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-review-synapse-rbac-role-assignments.md
Last updated 12/1/2020 -+ # How to review Synapse RBAC role assignments
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-set-up-access-control.md
Last updated 8/05/2021 -+
synapse-analytics Synapse Private Link Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-private-link-hubs.md
Last updated 12/01/2020 -+ # Connect to Azure Synapse Studio using Azure Private Link Hubs
synapse-analytics Synapse Workspace Managed Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-managed-private-endpoints.md
Last updated 01/12/2020 -+ # Synapse Managed private endpoints
synapse-analytics Synapse Workspace Synapse Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-synapse-rbac.md
Last updated 12/1/2020 -+ # What is Synapse role-based access control (RBAC)?
synapse-analytics Workspace Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/workspace-data-exfiltration-protection.md
Last updated 12/01/2020 -+ # Data exfiltration protection for Azure Synapse Analytics workspaces This article will explain data exfiltration protection in Azure Synapse Analytics
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/workspaces-encryption.md
Last updated 07/20/2021 -+
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
description: Learn how to enable the Synapse Studio connector for collecting and
-+
synapse-analytics Apache Spark Azure Machine Learning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-machine-learning-tutorial.md
Last updated 06/30/2020 -+ # Tutorial: Train a model in Python with automated machine learning
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
Last updated 03/01/2020 -+
synapse-analytics Apache Spark Custom Conda Channel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-custom-conda-channel.md
Last updated 08/11/2021 -+
synapse-analytics Apache Spark Machine Learning Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-machine-learning-concept.md
Last updated 11/13/2020 -+ # Machine learning with Apache Spark
synapse-analytics Apache Spark Machine Learning Mllib Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook.md
description: A tutorial on how to use Apache Spark MLlib to create a machine lea
-+ Last updated 04/15/2020
synapse-analytics Apache Spark Manage Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-manage-python-packages.md
Last updated 02/26/2020 -+
synapse-analytics Apache Spark Manage Scala Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-manage-scala-packages.md
Last updated 02/26/2020 -+
synapse-analytics Apache Spark Notebook Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-notebook-concept.md
Last updated 11/18/2020 -+
synapse-analytics Apache Spark To Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-to-power-bi.md
description: This tutorial provides an overview on how to create a Power BI dash
-+
synapse-analytics Azure Synapse Diagnostic Emitters Azure Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-eventhub.md
description: In this tutorial, you learn how to use the Synapse Apache Spark dia
-+
synapse-analytics Azure Synapse Diagnostic Emitters Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-storage.md
description: This article shows how to use the Synapse Spark diagnostic emitter
-+
synapse-analytics Connect Monitor Azure Synapse Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/connect-monitor-azure-synapse-spark-application-level-metrics.md
description: Tutorial - Learn how to integrate your existing on-premises Prometh
-+
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/intellij-tool-synapse.md
description: Tutorial - Use the Azure Toolkit for IntelliJ to develop Spark appl
-+
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/spark-dotnet.md
Last updated 05/01/2020 -+ # Use .NET for Apache Spark with Azure Synapse Analytics
synapse-analytics Tutorial Spark Pool Filesystem Spec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/tutorial-spark-pool-filesystem-spec.md
-+ Last updated 11/02/2021
synapse-analytics Tutorial Use Pandas Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md
-+ Last updated 11/02/2021
synapse-analytics Use Prometheus Grafana To Monitor Apache Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md
description: Tutorial - Learn how to deploy the Apache Spark application metrics
-+
synapse-analytics Vscode Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/vscode-tool-synapse.md
description: Tutorial - Use the Spark & Hive Tools for VSCode to develop Spark a
-+
synapse-analytics Gen2 Migration Schedule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/gen2-migration-schedule.md
description: Instructions for migrating an existing dedicated SQL pool (formerly
-+ ms.assetid: 04b05dea-c066-44a0-9751-0774eb84c689
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
Last updated 02/02/2019 -+ # Use maintenance schedules to manage service updates and maintenance
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
Last updated 04/04/2021 -+
synapse-analytics Quickstart Bulk Load Copy Tsql Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql-examples.md
Last updated 07/10/2020 -+
synapse-analytics Quickstart Bulk Load Copy Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md
Last updated 11/20/2020 -+
synapse-analytics Quickstart Configure Workload Isolation Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-portal.md
-+ Last updated 05/04/2020
synapse-analytics Quickstart Configure Workload Isolation Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-tsql.md
Last updated 04/27/2020 -+
synapse-analytics Quickstart Create A Workload Classifier Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-portal.md
-+ Last updated 05/04/2020
synapse-analytics Quickstart Create A Workload Classifier Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-tsql.md
Last updated 02/04/2020 -+
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
-+ Last updated 04/28/2020
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Last updated 4/30/2020 -+ tags: azure-synapse
synapse-analytics Resource Classes For Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management.md
Last updated 02/04/2020 -+
synapse-analytics Sql Data Warehouse Concept Resource Utilization Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md
Last updated 04/09/2020 -+
synapse-analytics Sql Data Warehouse How To Configure Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-configure-workload-importance.md
Last updated 05/15/2020 -+
synapse-analytics Sql Data Warehouse How To Manage And Monitor Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md
Last updated 02/04/2020 -+
synapse-analytics Sql Data Warehouse Monitor Workload Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md
Last updated 02/04/2020 -+ # Monitor workload - Azure portal
synapse-analytics Sql Data Warehouse Predict https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
Last updated 07/21/2020 -+
synapse-analytics Sql Data Warehouse Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-collation-types.md
Last updated 12/04/2019 -+
synapse-analytics Sql Data Warehouse Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
Last updated 01/06/2020 -+
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Last updated 02/04/2020 -+
synapse-analytics Sql Data Warehouse Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-importance.md
Last updated 02/04/2020 -+
synapse-analytics Sql Data Warehouse Workload Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-isolation.md
Last updated 11/16/2021 -+
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
Last updated 03/01/2021 -+
synapse-analytics Sql Data Warehouse Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management.md
Last updated 02/04/2020 -+
synapse-analytics Upgrade To Latest Generation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/upgrade-to-latest-generation.md
Last updated 02/19/2019 -+
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
Last updated 11/25/2020 -+ # Enabling Synapse workspace features for a dedicated SQL pool (formerly SQL DW)
synapse-analytics Workspace Connected Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/workspace-connected-experience.md
Last updated 11/23/2020 -+ # Enabling Synapse workspace features on an existing dedicated SQL pool (formerly SQL DW)
synapse-analytics Workspace Connected Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/workspace-connected-regions.md
Last updated 11/11/2020 -+
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/active-directory-authentication.md
Last updated 04/15/2020 -+ # Use Azure Active Directory Authentication for authentication with Synapse SQL
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Last updated 05/01/2020 -+ # Best practices for serverless SQL pool in Azure Synapse Analytics
synapse-analytics Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/connect-overview.md
Last updated 04/15/2020 -+
synapse-analytics Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/connection-strings.md
Last updated 04/15/2020 -+
synapse-analytics Create External Table As Select https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-external-table-as-select.md
Last updated 04/15/2020 -+ # Store query results to storage using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-use-external-tables.md
Last updated 07/23/2021 -+
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-use-views.md
Last updated 05/20/2020 -+
synapse-analytics Data Processed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/data-processed.md
Last updated 11/05/2020 -+ # Cost management for serverless SQL pool in Azure Synapse Analytics
synapse-analytics Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-dynamic-sql.md
Last updated 04/15/2020 -+
synapse-analytics Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-group-by-options.md
Last updated 04/15/2020 -+
synapse-analytics Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-label.md
Last updated 04/15/2020 -+
synapse-analytics Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-loops.md
Last updated 04/15/2020 -+ # Use T-SQL loops with Synapse SQL in Azure Synapse Analytics
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-openrowset.md
Last updated 11/02/2021 -+
synapse-analytics Develop Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-overview.md
Last updated 04/15/2020 -+ # Design decisions and coding techniques for Synapse SQL features in Azure Synapse Analytics
synapse-analytics Develop Storage Files Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-overview.md
Last updated 04/19/2020 -+ # Access external storage using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Develop Storage Files Spark Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-spark-tables.md
Last updated 10/05/2021 -+ # Synchronize Apache Spark for Azure Synapse external table definitions in serverless SQL pool
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
Last updated 06/11/2020 -+
synapse-analytics Develop Tables Cetas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-cetas.md
Last updated 09/15/2020 -+ # CETAS with Synapse SQL
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-data-types.md
Last updated 04/15/2020 -+
synapse-analytics Develop Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-overview.md
Last updated 04/15/2020 -+ # Design tables using Synapse SQL in Azure Synapse Analytics
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-statistics.md
Last updated 04/19/2020 -+ # Statistics in Synapse SQL
synapse-analytics Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-user-defined-schemas.md
Last updated 04/15/2020 -+
synapse-analytics Develop Variable Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-variable-assignment.md
Last updated 04/15/2020 -+ # Assign variables with Synapse SQL
synapse-analytics Develop Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-views.md
Last updated 04/15/2020 -+ # T-SQL views with dedicated SQL pool and serverless SQL pool in Azure Synapse Analytics
synapse-analytics Get Started Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-azure-data-studio.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with Azure Data Studio
synapse-analytics Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-connect-sqlcmd.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with sqlcmd
synapse-analytics Get Started Power Bi Professional https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-power-bi-professional.md
Last updated 04/15/2020 -+
synapse-analytics Get Started Ssms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-ssms.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with SQL Server Management Studio (SSMS)
synapse-analytics Get Started Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-visual-studio.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with Visual Studio and SSDT
synapse-analytics Mfa Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/mfa-authentication.md
Last updated 04/15/2020 -+
synapse-analytics On Demand Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/on-demand-workspace-overview.md
Last updated 04/15/2020 -+ # Serverless SQL pool in Azure Synapse Analytics
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Last updated 04/15/2020 -+ # Transact-SQL features supported in Azure Synapse SQL
Consumption models in Synapse SQL enable you to use different database objects.
| **Schemas** | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true), schemas are supported. | | **Temporary tables** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-temporary.md?context=/azure/synapse-analytics/context/context) | No, temporary tables might be used just to store some information from system views. | | **User defined procedures** | [Yes](/sql/t-sql/statements/create-procedure-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, stored procedures can be placed in any user databases (not `master` database). |
-| **User defined functions** | [Yes](/sql/t-sql/statements/create-function-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | Yes, only inline table-valued functions. Scalar user defined functions are not supported. |
+| **User defined functions** | [Yes](/sql/t-sql/statements/create-function-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | Yes, only inline table-valued functions. Scalar user-defined functions are not supported. |
| **Triggers** | No | No, serverless SQL pools do not allow changing data, so the triggers cannot react on data changes. | | **External tables** | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See supported [data formats](#data-formats). | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See the supported [data formats](#data-formats). | | **Caching queries** | Yes, multiple forms (SSD-based caching, in-memory, resultset caching). In addition, Materialized View are supported | No. Only file statistics are cached. | | **Table variables** | [No](/sql/t-sql/data-types/table-transact-sql?view=azure-sqldw-latest&preserve-view=true), use temporary tables | No, table variables are not supported. | | **[Table distribution](../sql-data-warehouse/sql-data-warehouse-tables-distribute.md?context=/azure/synapse-analytics/context/context)** | Yes | No, table distributions are not supported. | | **[Table indexes](../sql-data-warehouse/sql-data-warehouse-tables-index.md?context=/azure/synapse-analytics/context/context)** | Yes | No, indexes are not supported. |
-| **[Table partitions](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context)** | Yes | No, only external tables that are synchronized from the Apache Spark pools can be partitioned per folders. |
-| **[Statistics](develop-tables-statistics.md)** | Yes | Yes |
+| **Table partitioning** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context). | No. You can partition files using Hive-partition folder structure and create partitioned tables in Spark. The Spark partitioning will be [synchronized with the serverless pool](../metadat#partitioned-views) on folder partition structure, but external tables cannot be created on partitioned folders. |
+| **[Statistics](develop-tables-statistics.md)** | Yes | Yes, statistics are [created on external files](develop-tables-statistics.md#statistics-in-serverless-sql-pool). |
| **Workload management, resource classes, and concurrency control** | Yes, see [workload management, resource classes, and concurrency control](../sql-data-warehouse/resource-classes-for-workload-management.md?context=/azure/synapse-analytics/context/context). | No, serverless SQL pool automatically manages the resources. | | **Cost control** | Yes, using scale-up and scale-down actions. | Yes, using [the Azure portal or T-SQL procedure](./data-processed.md#cost-control). |
Data that is analyzed can be stored on various storage types. The following tabl
| **Internal storage** | Yes | No, data is placed in Azure Data Lake or cosmos DB analytical storage. | | **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. | | **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. |
-| **Azure SQL (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance) |
+| **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). |
| **Dataverse** | No | Yes, using [Synapse link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). | | **Azure CosmosDB transactional storage** | No | No, use Spark pools to update the Cosmos DB transactional storage. | | **Azure CosmosDB analytical storage** | No | Yes, using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) |
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
Last updated 03/02/2021 -+
synapse-analytics Query Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-data-storage.md
Last updated 04/15/2020 -+ # Query storage files with serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-delta-lake-format.md
Last updated 07/15/2021 -+
synapse-analytics Query Folders Multiple Csv Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-folders-multiple-csv-files.md
Last updated 04/15/2020 -+
synapse-analytics Query Json Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-json-files.md
Last updated 05/20/2020 -+ # Query JSON files using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Parquet Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-parquet-files.md
Last updated 05/20/2020 -+ # Query Parquet files using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Parquet Nested Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-parquet-nested-types.md
Last updated 05/20/2020 -+ # Query nested types in Parquet and JSON files by using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Single Csv File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-single-csv-file.md
Last updated 05/20/2020 -+ # Query CSV files
synapse-analytics Query Specific Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-specific-files.md
Last updated 05/20/2020 -+ # Use file metadata in serverless SQL pool queries
synapse-analytics Reference Tsql System Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/reference-tsql-system-views.md
Last updated 04/15/2020 -+ # System views supported in Synapse SQL
synapse-analytics Resource Consumption Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resource-consumption-models.md
Last updated 04/15/2020 -+
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Last updated 9/23/2021 -+
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/sql-authentication.md
Last updated 04/15/2020 -+ # SQL Authentication
synapse-analytics Tutorial Connect Power Bi Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-connect-power-bi-desktop.md
Last updated 05/20/2020 -+ # Tutorial: Use serverless SQL pool with Power BI Desktop & create a report
synapse-analytics Tutorial Data Analyst https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-data-analyst.md
Last updated 11/20/2020 -+ # Tutorial: Explore and Analyze data lakes with serverless SQL pool
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
Last updated 08/20/2021 -+ # Tutorial: Create Logical Data Warehouse with serverless SQL pool
synapse-analytics Concept Synapse Link Cosmos Db Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md
Last updated 06/02/2021 -+
synapse-analytics How To Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md
Last updated 03/02/2021 -+
synapse-analytics How To Copy To Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md
Last updated 08/10/2020 -+
synapse-analytics How To Query Analytical Store Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md
Last updated 11/02/2021 -+
synapse-analytics Troubleshoot Synapse Studio And Storage Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-and-storage-connectivity.md
Last updated 11/11/2020 -+ # Troubleshoot connectivity between Azure Synapse Analytics Synapse Studio and storage
synapse-analytics Troubleshoot Synapse Studio Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-powershell.md
Last updated 10/30/2020 -+ # Troubleshoot Synapse Studio connectivity with PowerShell
synapse-analytics Troubleshoot Synapse Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio.md
Last updated 04/15/2020 -+ # Synapse Studio troubleshooting
virtual-desktop Store Fslogix Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/store-fslogix-profile.md
Title: Storage FSLogix profile container Azure Virtual Desktop - Azure
description: Options for storing your Azure Virtual Desktop FSLogix profile on Azure Storage. Previously updated : 04/27/2021 Last updated : 01/04/2022
To learn more about FSLogix profile containers, user profile disks, and other us
If you're ready to create your own FSLogix profile containers, get started with one of these tutorials: -- [Getting started with FSLogix profile containers on Azure Files in Azure Virtual Desktop](create-file-share.md)
+- [Create an Azure file share with a domain controller](create-file-share.md)
+- [Create an Azure file share with Azure Active Directory](create-profile-container-azure-ad.md)
+- [Create an Azure file share with Azure Active Directory Domain Services](create-profile-container-adds.md)
- [Create an FSLogix profile container for a host pool using Azure NetApp files](create-fslogix-profile-container.md) - The instructions in [Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in Azure](/windows-server/remote/remote-desktop-services/rds-storage-spaces-direct-deployment/) also apply when you use an FSLogix profile container instead of a user profile disk
virtual-machines Capacity Reservation Associate Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-associate-vm.md
Title: Associate a virtual machine to a Capacity Reservation group (preview)
description: Learn how to associate a new or existing virtual machine to a Capacity Reservation group. -+ Previously updated : 08/09/2021 Last updated : 01/03/2022
In the request body, include the `capacityReservationGroup` property as shown be
1. Choose an **Image** and the **VM size** 1. Under *Administrator account*, provide a **username** and a **password** 1. The password must be at least 12 characters long and meet the defined complexity requirements
-1. Under *Inbound port rules*, choose **Allow selected ports** and then select **RDP** (3389) and **HTTP** (80) from the drop-down
1. Go to the *Advanced section* 1. In the **Capacity Reservations** dropdown, select the capacity reservation group that you want the VM to be associated with 1. Select the **Review + create** button
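For comparison with the portal steps above, here is a minimal Azure CLI sketch of the same association at VM creation time. The resource names and size are placeholders, and it assumes a CLI version that supports the `--capacity-reservation-group` parameter:

```bash
# Minimal sketch (placeholder names): create a VM and associate it with an
# existing capacity reservation group; the VM size must match a reservation in the group.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Win2019Datacenter \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --capacity-reservation-group myCapacityReservationGroup
```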
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/custom-data.md
- Title: Custom data and Azure Virtual Machines
- description: Details on using Custom data and Cloud-Init on Azure Virtual Machines
+ Title: Custom data and Azure virtual machines
+ description: This article gives details on using custom data and cloud-init on Azure virtual machines.
-# Custom data and Cloud-Init on Azure Virtual Machines
+# Custom data and cloud-init on Azure Virtual Machines
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-You may need to inject a script or other metadata into a Microsoft Azure virtual machine at provisioning time. In other clouds, this concept is often referred to as user data. In Microsoft Azure, we have a similar feature called custom data.
+You might need to inject a script or other metadata into a Microsoft Azure virtual machine (VM) at provisioning time. In other clouds, this concept is often called *user data*. Microsoft Azure has a similar feature called *custom data*.
-Custom data is only made available to the VM during first boot/initial setup, we call this 'provisioning'. Provisioning is the process where VM Create parameters (for example, hostname, username, password, certificates, custom data, keys etc.) are made available to the VM and a provisioning agent processes them, such as the [Linux Agent](./extensions/agent-linux.md) and [cloud-init](./linux/using-cloud-init.md#troubleshooting-cloud-init).
+Custom data is made available to the VM during first startup or setup, which is called *provisioning*. Provisioning is the process where VM creation parameters (for example, host name, username, password, certificates, custom data, and keys) are made available to the VM. A provisioning agent, such as the [Linux Agent](./extensions/agent-linux.md) or [cloud-init](./linux/using-cloud-init.md#troubleshooting-cloud-init), processes those parameters.
+## Pass custom data to the VM
+To use custom data, you must Base64-encode the contents before passing the data to the API--unless you're using a CLI tool that does the conversion for you, such as the Azure CLI. The size can't exceed 64 KB.
-## Passing custom data to the VM
-To use custom data, you must base64 encode the contents first before passing it to the API, unless you are using a CLI tool that does the conversion for you, such as AZ CLI. The size cannot exceed 64 KB.
+In the CLI, you can pass your custom data as a file, as the following example shows. The file will be converted to Base64.
-In CLI, you can pass your custom data as a file, and it will be converted to base64.
```bash az vm create \ --resource-group myResourceGroup \
az vm create \
--generate-ssh-keys ```
-In Azure Resource Manager (ARM), there is a [base64 function](../azure-resource-manager/templates/template-functions-string.md#base64).
+In Azure Resource Manager, there's a [base64 function](../azure-resource-manager/templates/template-functions-string.md#base64):
```json "name": "[parameters('virtualMachineName')]",
In Azure Resource Manager (ARM), there is a [base64 function](../azure-resource-
}, ```
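If you call the REST API or author templates directly, you produce the Base64 payload yourself. A minimal sketch, assuming GNU coreutils and a placeholder file name:

```bash
# Base64-encode a custom data file for use where the tooling doesn't convert it for you
base64 -w 0 cloud-init.txt > cloud-init.b64

# The encoded payload must stay under the 64 KB limit
wc -c cloud-init.b64
```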
-## Processing custom data
-The provisioning agents installed on the VMs handle interfacing with the platform and placing it on the file system.
+## Process custom data
+The provisioning agents installed on the VMs handle communication with the platform and placing data on the file system.
### Windows
-Custom data is placed in *%SYSTEMDRIVE%\AzureData\CustomData.bin* as a binary file, but it is not processed. If you wish to process this file, you will need to build a custom image, and write code to process the CustomData.bin.
+Custom data is placed in *%SYSTEMDRIVE%\AzureData\CustomData.bin* as a binary file, but it isn't processed. If you want to process this file, you'll need to build a custom image and write code to process *CustomData.bin*.
### Linux
-On Linux OS's, custom data is passed to the VM via the ovf-env.xml file, which is copied to the */var/lib/waagent* directory during provisioning. Newer versions of the Microsoft Azure Linux Agent will also copy the base64-encoded data to */var/lib/waagent/CustomData* as well for convenience.
+On Linux operating systems, custom data is passed to the VM via the *ovf-env.xml* file. That file is copied to the */var/lib/waagent* directory during provisioning. Newer versions of the Linux Agent will also copy the Base64-encoded data to */var/lib/waagent/CustomData* for convenience.
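As an illustration of the paths above, a quick way to inspect the payload from inside a provisioned Linux VM (assuming a Linux Agent version that writes the convenience copy):

```bash
# Decode the Base64-encoded copy that newer Linux Agent versions place under /var/lib/waagent
sudo base64 -d /var/lib/waagent/CustomData
```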
Azure currently supports two provisioning agents:
-* Linux Agent - By default the agent will not process custom data, you will need to build a custom image with it enabled. The relevant settings, as per the [documentation](https://github.com/Azure/WALinuxAgent#configuration) are:
- * Provisioning.DecodeCustomData
- * Provisioning.ExecuteCustomData
-When you enable custom data, and execute a script, it will delay the VM reporting that is it ready or that provisioning has succeeded until the script has completed. If the script exceeds the total VM provisioning time allowance of 40 mins, the VM Create will fail. Note, if the script fails to execute, or errors during executing, it is not deemed a fatal provisioning failure, you will need to create a notification path to alert you for the completion state of the script.
+* **Linux Agent**. By default, the agent won't process custom data. You need to build a custom image with the data enabled. The [relevant settings](https://github.com/Azure/WALinuxAgent#configuration) are:
+
+ * `Provisioning.DecodeCustomData`
+ * `Provisioning.ExecuteCustomData`
-To troubleshoot custom data execution, review */var/log/waagent.log*
+ When you enable custom data and run a script, it will delay the VM reporting that it's ready or that provisioning has succeeded until the script has finished. If the script exceeds the total VM provisioning time allowance of 40 minutes, VM creation will fail.
+
+ If the script fails to run, or errors happen during execution, that's not a fatal provisioning failure. You'll need to create a notification path to alert you for the completion state of the script.
-* cloud-init - By default will process custom data by default, cloud-init accepts [multiple formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html) of custom data, such as cloud-init configuration, scripts etc. Similar to the Linux Agent, when cloud-init processes the custom data. If there are errors during execution of the configuration processing or scripts, it is not deemed a fatal provisioning failure, and you will need to create a notification path to alert you for the completion state of the script. However, different to the Linux Agent, cloud-init does not wait on user custom data configurations to complete before reporting to the platform that the VM is ready. For more information on cloud-init on azure, review the [documentation](./linux/using-cloud-init.md).
+ To troubleshoot custom data execution, review */var/log/waagent.log*.
+* **cloud-init**. By default, this agent will process custom data. It accepts [multiple formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html) of custom data, such as cloud-init configuration and scripts.
-To troubleshoot custom data execution, review the troubleshooting [documentation](./linux/using-cloud-init.md#troubleshooting-cloud-init).
+ Similar to the Linux Agent, if errors happen during execution of the configuration processing or scripts when cloud-init is processing the custom data, that's not a fatal provisioning failure. You'll need to create a notification path to alert you for the completion state of the script.
+
+ However, unlike the Linux Agent, cloud-init doesn't wait for custom data configurations from the user to finish before reporting to the platform that the VM is ready. For more information on cloud-init on Azure, including troubleshooting, see [cloud-init support for virtual machines in Azure](./linux/using-cloud-init.md).
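As a concrete illustration of the two Linux Agent settings listed earlier, a minimal sketch for a custom image build might look like this. It assumes the default `/etc/waagent.conf` entries are present with the value `n`:

```bash
# Enable custom-data decoding and execution for the Linux Agent in a custom image
sudo sed -i \
  -e 's/^Provisioning.DecodeCustomData=n/Provisioning.DecodeCustomData=y/' \
  -e 's/^Provisioning.ExecuteCustomData=n/Provisioning.ExecuteCustomData=y/' \
  /etc/waagent.conf
```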
## FAQ ### Can I update custom data after the VM has been created?
-For single VMs, custom data in the VM model cannot be updated, but for VMSS, you can update VMSS custom data via [REST API](/rest/api/compute/virtualmachinescalesets/update), [Az CLI](/cli/azure/vmss#az_vmss_update), or [Az PowerShell](/powershell/module/az.compute/update-azvmss). When you update custom data in the VMSS model:
-* Existing instances in the VMSS will not get the updated custom data, only until they are reimaged.
-* Existing instances in the VMSS that are upgraded will not get the updated custom data.
+For single VMs, you can't update custom data in the VM model. But for virtual machine scale sets, you can update custom data via the [REST API](/rest/api/compute/virtualmachinescalesets/update), the [Azure CLI](/cli/azure/vmss#az_vmss_update), or [Azure PowerShell](/powershell/module/az.compute/update-azvmss). When you update custom data in the model for a virtual machine scale set:
+
+* Existing instances in the scale set won't get the updated custom data until they're reimaged.
+* Existing instances in the scale set that are upgraded won't get the updated custom data.
* New instances will receive the new custom data. ### Can I place sensitive values in custom data?
-We advise **not** to store sensitive data in custom data. For more information, see [Azure Security and encryption best practices](../security/fundamentals/data-encryption-best-practices.md).
-
+We advise *not* to store sensitive data in custom data. For more information, see [Azure data security and encryption best practices](../security/fundamentals/data-encryption-best-practices.md).
### Is custom data made available in IMDS?
-Custom data is not available in IMDS. We suggest using user data though IMDS instead. For more information, see [User data through Azure Instance Metadata Service](./linux/instance-metadata-service.md?tabs=linux#get-user-data)
+Custom data is not available in Azure Instance Metadata Service (IMDS). We suggest using user data in IMDS instead. For more information, see [User data through Azure Instance Metadata Service](./linux/instance-metadata-service.md?tabs=linux#get-user-data).
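For reference, a hedged sketch of reading user data from inside the VM through IMDS; it assumes the `userData` endpoint and the API version shown here are available to your VM:

```bash
# Query IMDS for user data (returned Base64-encoded) and decode it
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
  | base64 -d
```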
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-linux.md
If your script is on a local server, then you may still need additional firewall
* When the script is running, you will only see a 'transitioning' extension status from the Azure portal or CLI. If you want more frequent status updates of a running script, you will need to create your own solution. * Custom Script extension does not natively support proxy servers; however, you can use a file transfer tool that supports proxy servers within your script, such as *Curl*. * Be aware of non-default directory locations that your scripts or commands may rely on, and include logic to handle this.
-* When deploying custom script to production VMSS instances it is suggested to deploy via json template and store your script storage account where you have control over the SAS token.
## Extension schema
az vm extension set \
--protected-settings ./protected-config.json ```
+## Virtual machine scale sets
+
+If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the shared access signature token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account shared access signature token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
+
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension?view=azure-cli-latest), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the shared access signature token for accessing the script in your storage account for as long as you need.
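A minimal Azure CLI sketch of that recommendation follows. The resource names, script URI, and command are placeholders; pair the deployment with a managed identity or a shared access signature token whose lifetime you control:

```bash
# Deploy the Custom Script Extension (Linux) to an existing scale set
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris":["https://mystorage.blob.core.windows.net/scripts/config.sh"],"commandToExecute":"./config.sh"}'
```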
+ ## Troubleshooting When the Custom Script Extension runs, the script is created or downloaded into a directory that's similar to the following example. The command output is also saved into this directory in `stdout` and `stderr` files.
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-windows.md
If you are using [Invoke-WebRequest](/powershell/module/microsoft.powershell.uti
```error The response content cannot be parsed because the Internet Explorer engine is not available, or Internet Explorer's first-launch configuration is not complete. Specify the UseBasicParsing parameter and try again. ```
-## Virtual Machine Scale Sets
-To deploy the Custom Script Extension on a Scale Set, see [Add-AzVmssExtension](/powershell/module/az.compute/add-azvmssextension)
+## Virtual machine scale sets
+
+If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the shared access signature token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account shared access signature token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
+
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension?view=azure-cli-latest), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the shared access signature token for accessing the script in your storage account for as long as you need.
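A minimal Azure CLI sketch for the Windows extension follows; the publisher and type differ from the Linux version, and the names, URI, and command are placeholders:

```bash
# Deploy the Custom Script Extension (Windows) to an existing scale set
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --settings '{"fileUris":["https://mystorage.blob.core.windows.net/scripts/config.ps1"],"commandToExecute":"powershell -ExecutionPolicy Unrestricted -File config.ps1"}'
```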
## Classic VMs
virtual-machines Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/features-windows.md
The following troubleshooting actions apply to all VM extensions:
- Look at the system logs. Check for other operations that might have interfered with the extension, such as a long-running installation of another application that required exclusive access to the package manager.
+- In a VM, if an existing extension is in a failed provisioning state, no new extension can be installed.
+ ### Common reasons for extension failures - Extensions have 20 minutes to run. (Exceptions are Custom Script, Chef, and DSC, which have 90 minutes.) If your deployment exceeds this time, it's marked as a timeout. The cause of this can be low-resource VMs, or other VM configurations or startup tasks are consuming large amounts of resources while the extension is trying to provision.
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generation-2.md
Azure now offers generation 2 support for the following selected VM series:
Generation 2 VMs support the following Marketplace images:
-* Windows Server 2019, 2016, 2012 R2, 2012
+* Windows Server 2022, 2019, 2016, 2012 R2, 2012
+* Windows 11 Pro, Windows 11 Enterprise
* Windows 10 Pro, Windows 10 Enterprise
-* SUSE Linux Enterprise Server 15 SP1
+* SUSE Linux Enterprise Server 15 SP3, SP2
* SUSE Linux Enterprise Server 12 SP4
-* Ubuntu Server 16.04, 18.04, 19.04, 19.10, 20.04
-* RHEL 8.2, 8.1, 8.0, 7.9, 7.7, 7.6, 7.5, 7.4, 7.0, 8.3
-* Cent OS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 8.2, 8.3
-* Oracle Linux 7.7, 7.7-CI, 7.8
+* Ubuntu Server 21.04, 20.04 LTS, 18.04 LTS, 16.04 LTS
+* RHEL 8.4, 8.3, 8.2, 8.1, 8.0, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0
+* CentOS 8.4, 8.3, 8.2, 8.1, 8.0, 7.7, 7.6, 7.5, 7.4
+* Oracle Linux 8.4 LVM, 8.3 LVM, 8.2 LVM, 8.1, 7.9 LVM, 7.9, 7.8, 7.7
> [!NOTE] > Specific Virtual machine sizes like Mv2-Series, DC-series, ND A100 v4-series, NDv2-series, Msv2 and Mdsv2-series may only support a subset of these images - please look at the relevant virtual machine size documentation for complete details.
For more information, see [Trusted launch](trusted-launch.md).
## Creating a generation 2 VM
+### Azure Resource Manager template
+To create a simple Windows generation 2 VM, see [Create a Windows virtual machine from a Resource Manager template](https://docs.microsoft.com/azure/virtual-machines/windows/ps-template).
+To create a simple Linux generation 2 VM, see [How to create a Linux virtual machine with Azure Resource Manager templates](https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-secured-vm-from-template).
+ ### Marketplace image In the Azure portal or Azure CLI, you can create generation 2 VMs from a Marketplace image that supports UEFI boot.
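As a hedged example of the CLI path, the sketch below creates a generation 2 VM from a Marketplace image whose SKU is published as a `gen2` variant. The URN is only an example; verify availability in your region with `az vm image list`:

```bash
# Create a generation 2 VM from a gen2 Marketplace image (example URN, placeholder names)
az vm create \
  --resource-group myResourceGroup \
  --name myGen2VM \
  --image Canonical:UbuntuServer:18_04-lts-gen2:latest \
  --generate-ssh-keys
```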
You can also create generation 2 VMs by using virtual machine scale sets. In the
Yes. For more information, see [Create a VM with accelerated networking](../virtual-network/create-vm-accelerated-networking-cli.md). * **Do generation 2 VMs support Secure Boot or vTPM in Azure?**
- Both vTPM and Secure Boot are features of trusted launch (preview) for generation 2 VMs. For more information, see [Trusted launch](trusted-launch.md).
+ Both vTPM and Secure Boot are features of trusted launch for generation 2 VMs. For more information, see [Trusted launch](trusted-launch.md).
* **Is VHDX supported on generation 2?** No, generation 2 VMs support only VHD.
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/run-command.md
The Run Command feature uses the virtual machine (VM) agent to run shell scripts
You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), or [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke) for Linux VMs.
-This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of improper network or administrative user configuration.
+This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of network or administrative user configuration.
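A minimal Azure CLI sketch of invoking Run Command on a Linux VM (resource names are placeholders):

```bash
# Run a shell command on the VM through the VM agent; output is returned in the response
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myVM \
  --command-id RunShellScript \
  --scripts "uname -a && df -h"
```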
## Restrictions
virtual-machines Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/marketplace-images.md
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
-SourceAddressPrefix * ` -SourcePortRange * ` -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
+ -DestinationPortRange 3389 -Access Deny
$nsg = New-AzNetworkSecurityGroup ` -ResourceGroupName $resourceGroup ` -Location $location `
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/configure.md
On InfiniBand (IB) enabled VMs, the appropriate drivers are required to enable R
- The CentOS-HPC version 7.9 VM image additionally comes pre-configured with the Nvidia GPU drivers. - The [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) in the Marketplace come pre-configured with the appropriate IB drivers and GPU drivers.
-These VM images (VMI) are based on the base CentOS and Ubuntu marketplace VM images. Scripts used in the creation of these VM images from their base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos).
+These VM images are based on the base CentOS and Ubuntu marketplace VM images. Scripts used in the creation of these VM images from their base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos).
On GPU-enabled [N-series](../../sizes-gpu.md) VMs, the appropriate GPU drivers are additionally required. These can be made available by the following methods: - Use the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and [CentOS-HPC VM image](#centos-hpc-vm-images) version 7.9, which come pre-configured with the Nvidia GPU drivers and GPU compute software stack (CUDA, NCCL).
The VM size support matrix for the GPU drivers in supported HPC VM images is as
- [N-series](../../sizes-gpu.md): NDv2, NDv4 VM sizes are supported with the Nvidia GPU drivers and GPU compute software stack (CUDA, NCCL). - The other 'NC' and 'ND' VM sizes in the [N-series](../../sizes-gpu.md) are supported with the Nvidia GPU drivers.
-Also note that all the above VM sizes support "Gen 2" VMs, though some older ones also support "Gen 1" VMs. "Gen 2" support is also indicated with a "01" at the end of the VMI URN or version.
+All of the VM sizes in the N-series support [Gen 2 VMs](../../generation-2.md), though some older ones also support Gen 1 VMs. Gen 2 support is also indicated with a "01" at the end of the image URN or version.
### CentOS-HPC VM images
virtual-machines Oracle Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md
In regions where availability zones aren't supported, you may use availability s
#### Oracle Data Guard Far Sync
-Oracle Data Guard Far Sync provides zero data loss protection capability for Oracle Databases. This capability allows you to protect against data loss in if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that only has a control file, password file, spfile, and standby logs. There are no data files or rego log files.
+Oracle Data Guard Far Sync provides zero data loss protection capability for Oracle Databases. This capability allows you to protect against data loss if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that only has a control file, password file, spfile, and standby logs. There are no data files or redo log files.
For zero data loss protection, there must be synchronous communication between your primary database and the Far Sync instance. The Far Sync instance receives redo from the primary in a synchronous manner and forwards it immediately to all the standby databases in an asynchronous manner. This setup also reduces the overhead on the primary database, because it only has to send the redo to the Far Sync instance rather than all the standby databases. If a Far Sync instance fails, Data Guard automatically uses asynchronous transport to the secondary database from the primary database to maintain near-zero data loss protection. For added resiliency, customers may deploy multiple Far Sync instances for each database instance (primary and secondaries).
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-configure-system.md
The database tier defines the infrastructure for the database tier, supported da
> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used together with ANF pinning| > | `database_no_avset` | Controls if the database virtual machines are deployed without availability sets | Optional | default is false | > | `database_no_ppg` | Controls if the database servers will not be placed in a proximity placement group | Optional | default is false |
+> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | default is true |
The Virtual Machine and the operating system image is defined using the following structure:
The application tier defines the infrastructure for the application tier, which
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | -- | -| |
-> | `scs_high_availability` | Defines if the Central Services is highly available | Optional | See [High availability configuration ](automation-configure-system.md#high-availability-configuration) |
+> | `scs_high_availability` | Defines if the Central Services is highly available | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) |
> | `scs_instance_number` | The instance number of SCS | Optional | | > | `ers_instance_number` | The instance number of ERS | Optional | | > | `scs_server_count` | Defines the number of scs servers | Required | |
By default the SAP System deployment uses the credentials from the SAP Workload
> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks using customer provided keys | Optional | > | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional | > | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows the possible values are `None`, `Windows_Client` and `Windows_Server`. |
+> | `use_zonal_markers` | Specifies if zonal Virtual Machines will include a zonal identifier. 'xooscs_z1_00l###' vs 'xooscs00l###'| Default value is true. |
-## Azure NetApp Support
-
+## NFS Support
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -- | -- |
-> | `use_ANF` | If specified, deploys the Azure NetApp Files Account and Capacity Pool | Optional |
-> | `anf_sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional |
-> | `anf_transport_volume_size` | Defines the size (in GB) for the 'saptransport' volume | Optional |
+> | `NFS_Provider` | Defines which NFS back end to use; the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp Files. |
+> | `sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional |
+
+### Azure Files NFS Support
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | - | -- | -- |
+> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account for Azure Files | Optional |
## High availability configuration
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
The table below defines the parameters used for defining the Key Vault informati
> | `dns_label` | If specified, is the DNS name of the private DNS zone | Optional | > | `dns_resource_group_name` | The name of the resource group containing the Private DNS zone | Optional |
+## NFS Support
-## Azure NetApp Support
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | - | -- | -- |
+> | `NFS_Provider` | Defines which NFS back end to use; the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp Files. |
+> | `sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional |
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | - | --| -- | |
-> | `use_ANF` | If specified, deploys the Azure NetApp Files Account and Capacity Pool | Optional | |
> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files Account | Optional | For existing environment deployments | > | `ANF_account_name` | Name for the Azure NetApp Files Account | Optional | | > | `ANF_service_level` | Service level for the Azure NetApp Files Capacity Pool | Optional | |
The table below defines the parameters used for defining the Key Vault informati
> | `anf_subnet_name` | The name of the ANF subnet | Optional | | > | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet | Required | For existing environment deployments | > | `anf_subnet_address_prefix` | The address range for the `ANF` subnet | Required | For new environment deployments |
+> | `transport_volume_size` | Defines the size (in GB) for the 'saptransport' volume | Optional |
## Other Parameters
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
You can copy the sample configuration files to start testing the deployment auto
cd C:\Azure_SAP_Automated_Deployment
-xcopy sap-automation\samples\WORKSPACES WORKSPACES
+xcopy /E sap-automation\samples\WORKSPACES WORKSPACES
```
virtual-machines Automation Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-deployment-framework.md
The [automation framework](https://github.com/Azure/sap-automation) has two main
You will use the control plane of the SAP deployment automation framework to deploy the SAP Infrastructure and the SAP application infrastructure. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
+> [!NOTE]
+> This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance.
+>
+> This automation framework also follows the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/).
+ The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment a single control plane is used to manage multiple SAP deployments. :::image type="content" source="./media/automation-deployment-framework/control-plane-sap-infrastructure.png" alt-text="Diagram showing the SAP deployment automation framework's dependency between the control plane and application plane.":::
The following diagram shows the key components of the control plane and workload
:::image type="content" source="./media/automation-deployment-framework/automation-diagram-full.png" alt-text="Diagram showing the SAP deployment automation framework environment.":::
-> [!NOTE]
-> This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance.
->
-> This automation framework also follows the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/).
-
-You will use the control plane of the SAP deployment automation framework to deploy the SAP Infrastructure and the SAP application infrastructure. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
- The application configuration will be performed from the Ansible Controller in the Control plane using a set of pre-defined playbooks. These playbooks will: - Configure base operating system settings
The application configuration will be performed from the Ansible Controller in t
## About the control plane
-The control plane houses the infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
+The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
The control plane provides the following services - Terraform Deployment Infrastructure
virtual-machines Automation Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-get-started.md
export ARM_SUBSCRIPTION_ID=<subscriptionID>
> [!NOTE] > Be sure to replace the sample value `<subscriptionID>` with your information.
-You can copy the sample configuration files to start testing the deployment automation framework.
-
-```bash
-cd ~/Azure_SAP_Automated_Deployment
-
-cp -R sap-automation/samples/WORKSPACES WORKSPACES
-
-```
-- # [Windows](#tab/windows) ```powershell
git clone https://github.com/Azure/sap-automation.git
Import the PowerShell module ```powershell
-Import-Module C:\Azure_SAP_Automated_Deployment\sap-automation\deploy\scripts\pwsh\SAPDeploymentUtilities\Output\SAPDeploymentUtilities\SAPDeploymentUtilitiespsd1
+Import-Module C:\Azure_SAP_Automated_Deployment\sap-automation\deploy\scripts\pwsh\SAPDeploymentUtilities\Output\SAPDeploymentUtilities\SAPDeploymentUtilities.psd1
```
cp -R sap-automation/samples/WORKSPACES WORKSPACES
cd C:\Azure_SAP_Automated_Deployment mkdir WORKSPACES
-xcopy sap-automation\samples\WORKSPACES WORKSPACES
+xcopy /E sap-automation\samples\WORKSPACES WORKSPACES
```
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-addresses.md
Public IP addresses are created with one of the following SKUs:
| Public IP address | Standard | Basic | | | | | | Allocation method| Static | For IPv4: Dynamic or Static; For IPv6: Dynamic.|
-| | Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.|Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.|
+| Idle Timeout | Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.|Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.|
| Security | Secure by default model and be closed to inbound traffic when used as a frontend. Allow traffic with [network security group](../../virtual-network/network-security-groups-overview.md#network-security-groups) (NSG) is required (for example, on the NIC of a virtual machine with a Standard SKU Public IP attached).| Open by default. Network security groups are recommended but optional for restricting inbound or outbound traffic.| | [Availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) | Supported. Standard IPs can be non-zonal, zonal, or zone-redundant. **Zone redundant IPs can only be created in [regions where 3 availability zones](../../availability-zones/az-region.md) are live.** IPs created before zones are live won't be zone redundant. | Not supported. | | [Routing preference](routing-preference-overview.md)| Supported to enable more granular control of how traffic is routed between Azure and the Internet. | Not supported.|
Public IPs have two types of assignments:
| Resource | Static | Dynamic | | | | |
-| Standard Public IPv4 | :white_check_mark: | x |
+| Standard public IPv4 | :white_check_mark: | x |
| Standard public IPv6 | :white_check_mark: | x | | Basic public IPv4 | :white_check_mark: | :white_check_mark: | | Basic public IPv6 | x | :white_check_mark: |
vpn-gateway Bgp How To Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/bgp-how-to-cli.md
Title: 'Configure BGP for VPN Gateway using CLI'
description: Learn how to configure BGP for VPN gateways using CLI. -+ Last updated 09/02/2020-+ # How to configure BGP on an Azure VPN gateway by using CLI
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/bgp-howto.md
Title: 'Configure BGP for VPN Gateway: Portal'
description: Learn how to configure BGP for Azure VPN Gateway. -+ Last updated 07/26/2021-+
vpn-gateway Ipsec Ike Policy Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/ipsec-ike-policy-howto.md
Title: 'IPsec/IKE policy for S2S VPN & VNet-to-VNet connections: Azure portal'
description: Learn how to configure IPsec/IKE policy for S2S or VNet-to-VNet connections with Azure VPN Gateways using the Azure portal. -+ Last updated 04/28/2021-+ # Configure IPsec/IKE policy for S2S VPN or VNet-to-VNet connections: Azure portal
vpn-gateway Nat Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/nat-howto.md
Title: 'Configure NAT on Azure VPN Gateway'
description: Learn how to configure NAT on Azure VPN Gateway. -+ Last updated 11/29/2021-+ # How to configure NAT on Azure VPN Gateways
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/nat-overview.md
Title: 'About NAT (Network Address Translation) on VPN Gateway'
description: Learn about NAT (Network Address Translation) in Azure VPN to connect networks with overlapping address spaces. -+ Last updated 12/02/2021-+ # About NAT on Azure VPN Gateway
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-tenant.md
Use the steps in [Add or delete users - Azure Active Directory](../active-direct
1. Enable Azure AD authentication on the VPN gateway by navigating to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section.
- * **Tenant:** TenantID for the Azure AD tenant ```https://login.microsoftonline.com/{AzureAD TenantID}/```
-
+ * **Tenant:** TenantID for the Azure AD tenant
+ * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
+   * Enter `https://login.microsoftonline.us/{AzureAD TenantID}/` for Azure Government AD
+   * Enter `https://login.microsoftonline.de/{AzureAD TenantID}/` for Azure Germany AD
+   * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID}/` for China 21Vianet AD
+
* **Audience:** Application ID of the "Azure VPN" Azure AD Enterprise App * Enter 41b23e61-6c1e-4545-b367-cd054e0ed4b4 for Azure Public
Use the steps in [Add or delete users - Azure Active Directory](../active-direct
* Enter 49f817b6-84ae-4cc0-928c-73f27289b3aa for Azure China 21Vianet
- * **Issuer**: URL of the Secure Token Service ```https://sts.windows.net/{AzureAD TenantID}/```
+ * **Issuer**: URL of the Secure Token Service `https://sts.windows.net/{AzureAD TenantID}/`
:::image type="content" source="./media/openvpn-create-azure-ad-tenant/azure-ad-auth-portal.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Azure Active Directory settings." border="false":::
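The same values can also be applied with the Azure CLI. This is a hedged sketch that assumes a CLI version supporting the `--aad-*` parameters; the gateway name, resource group, and tenant ID are placeholders, and the audience shown is the Azure Public value from the list above:

```bash
# Enable Azure AD authentication for OpenVPN point-to-site on an existing VPN gateway
az network vnet-gateway update \
  --resource-group myResourceGroup \
  --name myVpnGateway \
  --aad-tenant "https://login.microsoftonline.com/<tenant-id>/" \
  --aad-audience "41b23e61-6c1e-4545-b367-cd054e0ed4b4" \
  --aad-issuer "https://sts.windows.net/<tenant-id>/"
```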
vpn-gateway Vpn Gateway 3Rdparty Device Config Cisco Asa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-3rdparty-device-config-cisco-asa.md
Title: 'Sample configuration for connecting Cisco ASA devices to VPN gateways'
description: View sample configurations for connecting Cisco ASA devices to Azure VPN gateways. -+ Last updated 04/29/2021-+ # Sample configuration: Cisco ASA device (IKEv2/no BGP)
vpn-gateway Vpn Gateway 3Rdparty Device Config Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-3rdparty-device-config-overview.md
Title: 'Partner VPN device configurations for connecting to Azure VPN gateways' description: Learn about partner VPN device configurations for connecting to Azure VPN gateways. -+ Last updated 09/02/2020-+
vpn-gateway Vpn Gateway About Compliance Crypto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-compliance-crypto.md
Title: 'Cryptographic requirements for VPN gateways'
description: Learn how to configure Azure VPN gateways to satisfy cryptographic requirements for both cross-premises S2S VPN tunnels, and Azure VNet-to-VNet connections. -+ Last updated 12/02/2020-+ # About cryptographic requirements and Azure VPN gateways
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
Title: 'About VPN devices for connections'
description: Learn about VPN devices and IPsec parameters for Site-to-Site cross-premises connections. Links are provided to configuration instructions and samples. -+ Last updated 12/10/2021-+ # About VPN devices and IPsec/IKE parameters for Site-to-Site VPN Gateway connections
vpn-gateway Vpn Gateway Activeactive Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-activeactive-rm-powershell.md
Title: 'Configure active-active S2S VPN connections'
description: Learn how to configure active-active connections with VPN gateways using PowerShell. -+ Last updated 09/03/2020--++
vpn-gateway Vpn Gateway Bgp Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-bgp-overview.md
Title: 'About BGP with VPN Gateway'
description: Learn about Border Gateway Protocol (BGP) in Azure VPN, the standard internet protocol to exchange routing and reachability information between networks. -+ Last updated 09/02/2020-+ # About BGP with Azure VPN Gateway
vpn-gateway Vpn Gateway Bgp Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md
Title: 'Configure BGP for VPN Gateway using PowerShell'
description: Learn how to configure BGP for VPN gateways using PowerShell. -+ Last updated 09/02/2020-+
vpn-gateway Vpn Gateway Connect Multiple Policybased Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md
Title: 'Connect VPN gateways to multiple on-premises policy-based VPN devices'
description: Learn how to configure an Azure route-based VPN gateway to multiple policy-based VPN devices using PowerShell. -+ Last updated 09/02/2020-+ # Connect Azure VPN gateways to multiple on-premises policy-based VPN devices using PowerShell
vpn-gateway Vpn Gateway Download Vpndevicescript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-download-vpndevicescript.md
Title: 'Download VPN device configuration scripts for S2S VPN connections'
description: Learn how to download VPN device configuration scripts for S2S VPN connections with Azure VPN Gateways. -+ Last updated 09/02/2020-+
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-highlyavailable.md
Title: 'About Highly Available gateway configurations'
description: Learn about highly available configuration options using Azure VPN Gateways. -+ Last updated 05/27/2021-+ # Highly Available cross-premises and VNet-to-VNet connectivity
vpn-gateway Vpn Gateway Ipsecikepolicy Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md
Title: 'IPsec/IKE policy for S2S VPN & VNet-to-VNet connections: PowerShell'
description: Learn how to configure IPsec/IKE policy for S2S or VNet-to-VNet connections with Azure VPN Gateways using PowerShell. -+ Last updated 09/02/2020-+
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-multi-site.md
Title: 'Connect a VNet to multiple sites using VPN Gateway: Classic'
description: Learn how to connect multiple on-premises sites to a classic virtual network using a VPN gateway. -+ Last updated 09/03/2020-+ # Add a Site-to-Site connection to a VNet with an existing VPN gateway connection (classic)
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vs-azure-tools-storage-manage-with-storage-explorer.md
In this article, you'll learn several ways of connecting to and managing your Az
The following versions of Windows support Storage Explorer:
-* Windows 10 (recommended)
+* Windows 11
+* Windows 10
* Windows 8 * Windows 7
web-application-firewall Waf Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/waf-sentinel.md
To enable log analytics for each resource, go to your individual Azure Front Doo
4. On the Azure home page, type *Microsoft Sentinel* in the search bar and select the **Microsoft Sentinel** resource. 2. Select an already active workspace or create a new workspace. 3. On the left side panel under **Configuration** select **Data Connectors**.
-4. Search for **Microsoft web application firewall** and select **Microsoft web application firewall (WAF)**. Select **Open connector** page on the bottom right.
+4. Search for **Azure web application firewall** and select **Azure web application firewall (WAF)**. Select **Open connector** page on the bottom right.
:::image type="content" source="media//waf-sentinel/data-connectors.png" alt-text="Data connectors":::