Updates from: 07/07/2021 03:05:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/api-connectors-overview.md
Previously updated : 04/30/2021 Last updated : 07/05/2021
The output claims should look like the following XML snippet:
</OutputClaims> ```
-To parse a nested JSON Body response, set the ResolveJsonPathsInJsonTokens metadata to true. In the output claim, set the PartnerClaimType to the JSON path element you want to output.
+### Handling null values
+
+A database uses a null value when the value in a column is unknown or missing. Do not include JSON keys with a `null` value. In the following example, the `email` key returns a `null` value:
+
+```json
+{
+ "name": "Emily Smith",
+ "email": null,
+ "loyaltyNumber": 1234
+}
+```
+
+When an element is null, either:
+
+- Omit the key-value pair from the JSON.
+- Return a value that corresponds to the Azure AD B2C claim data type. For example, for a `string` data type, return an empty string `""`. For an `integer` data type, return a zero value `0`. For a `dateTime` data type, return a minimum value such as `1970-01-01T00:00:00.0000000Z`.
+
+The following example demonstrates how to handle a null value. The email is omitted from the JSON:
+
+```json
+{
+ "name": "Emily Smith",
+ "loyaltyNumber": 1234
+}
+```
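
To illustrate the second option, here is a minimal sketch (claim names reused from the examples above) that returns type-appropriate default values instead of omitting the keys:

```json
{
  "name": "Emily Smith",
  "email": "",
  "loyaltyNumber": 1234
}
```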
+
+### Parse a nested JSON body
+
+To parse a nested JSON body response, set the `ResolveJsonPathsInJsonTokens` metadata to `true`. In the output claim, set the `PartnerClaimType` to the JSON path element you want to output.
```json "contacts": [
See the following articles for examples of using a RESTful technical profile:
- [Secure your REST API services](secure-rest-api.md) - [Reference: RESTful technical profile](restful-technical-profile.md)
active-directory-b2c Enable Authentication Android App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-android-app.md
Previously updated : 07/05/2021 Last updated : 07/06/2021
Your redirect URI and the `BrowserTabActivity` activity should look similar to t
The redirect URL for the sample Android app:
-```kotlin
+```
msauth://com.azuresamples.msalandroidkotlinapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D ```
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/network-considerations.md
Previously updated : 12/16/2020 Last updated : 07/06/2021
A managed domain creates some networking resources during deployment. These reso
| Azure resource | Description | |:-|:| | Network interface card | Azure AD DS hosts the managed domain on two domain controllers (DCs) that run on Windows Server as Azure VMs. Each VM has a virtual network interface that connects to your virtual network subnet. |
-| Dynamic standard public IP address | Azure AD DS communicates with the synchronization and management service using a standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](../virtual-network/public-ip-addresses.md). |
-| Azure standard load balancer | Azure AD DS uses a standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md) |
-| Network address translation (NAT) rules | Azure AD DS creates and uses three NAT rules on the load balancer - one rule for secure HTTP traffic, and two rules for secure PowerShell remoting. |
+| Dynamic standard public IP address | Azure AD DS communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](../virtual-network/public-ip-addresses.md). |
+| Azure standard load balancer | Azure AD DS uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md) |
+| Network address translation (NAT) rules | Azure AD DS creates and uses two inbound NAT rules on the load balancer for secure PowerShell remoting. If a Standard SKU load balancer is used, it also has an outbound NAT rule. For the Basic SKU load balancer, no outbound NAT rule is required. |
| Load balancer rules | When a managed domain is configured for secure LDAP on TCP port 636, three rules are created and used on a load balancer to distribute the traffic. | > [!WARNING]
If needed, you can [create the required network security group and rules using A
* Access is only allowed with business justification, such as for management or troubleshooting scenarios. * This rule can be set to *Deny*, and only set to *Allow* when required. Most management and monitoring tasks are performed using PowerShell remoting. RDP is only used in the rare event that Microsoft needs to connect remotely to your managed domain for advanced troubleshooting.
-> [!NOTE]
-> You can't manually select the *CorpNetSaw* service tag from the portal if you try to edit this network security group rule. You must use Azure PowerShell or the Azure CLI to manually configure a rule that uses the *CorpNetSaw* service tag.
->
-> For example, you can use the following script to create a rule allowing RDP:
->
-> `Get-AzureRmNetworkSecurityGroup -Name "nsg-name" -ResourceGroupName "resource-group-name" | Add-AzureRmNetworkSecurityRuleConfig -Name "new-rule-name" -Access "Allow" -Protocol "TCP" -Direction "Inbound" -Priority "priority-number" -SourceAddressPrefix "CorpNetSaw" -SourcePortRange "" -DestinationPortRange "3389" -DestinationAddressPrefix "" | Set-AzureRmNetworkSecurityGroup`
+
+You can't manually select the *CorpNetSaw* service tag from the portal if you try to edit this network security group rule. You must use Azure PowerShell or the Azure CLI to manually configure a rule that uses the *CorpNetSaw* service tag.
+
+For example, you can use the following script to create a rule allowing RDP:
+
+```powershell
+Get-AzNetworkSecurityGroup -Name "nsg-name" -ResourceGroupName "resource-group-name" | Add-AzNetworkSecurityRuleConfig -Name "new-rule-name" -Access "Allow" -Protocol "TCP" -Direction "Inbound" -Priority "priority-number" -SourceAddressPrefix "CorpNetSaw" -SourcePortRange "*" -DestinationPortRange "3389" -DestinationAddressPrefix "*" | Set-AzNetworkSecurityGroup
+```
## User-defined routes
active-directory Concept Mfa Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-plan.md
+
+ Title: Plan an Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) deployment
+description: Learn how to plan and implement an Azure AD MFA roll-out.
+ Last updated : 07/01/2021
+# Plan an Azure Active Directory Multi-Factor Authentication deployment
+
+Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. Organizations can enable multifactor authentication with [Conditional Access](../conditional-access/overview.md) to make the solution fit their specific needs.
+
+This deployment guide shows you how to plan and implement an [Azure AD MFA](concept-mfa-howitworks.md) roll-out.
+
+## Prerequisites for deploying Azure AD MFA
+
+Before you begin your deployment, ensure you meet the following prerequisites for your relevant scenarios.
+
+| Scenario | Prerequisite |
+|-|--|
+|**Cloud-only** identity environment with modern authentication | **No prerequisite tasks** |
+|**Hybrid identity** scenarios | Deploy [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) and synchronize user identities between the on-premises Active Directory Domain Services (AD DS) and Azure AD. |
+| **On-premises legacy applications** published for cloud access| Deploy [Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) |
+
+## Choose authentication methods for MFA
+
+There are many methods that can be used for second-factor authentication. You can choose from the list of available authentication methods, evaluating each in terms of security, usability, and availability.
+
+>[!IMPORTANT]
+>Enable more than one MFA method so that users have a backup method available in case their primary method is unavailable.
+Methods include:
+
+- [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)
+- [Microsoft Authenticator app](concept-authentication-authenticator-app.md)
+- [FIDO2 security key (preview)](concept-authentication-passwordless.md#fido2-security-keys)
+- [OATH hardware tokens (preview)](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview)
+- [OATH software tokens](concept-authentication-oath-tokens.md#oath-software-tokens)
+- [SMS verification](concept-authentication-phone-options.md#mobile-phone-verification)
+- [Voice call verification](concept-authentication-phone-options.md)
+
+When choosing the authentication methods that will be used in your tenant, consider their security and usability:
+
+![Choose the right authentication method](media/concept-authentication-methods/authentication-methods.png)
+
+To learn more about the strength and security of these methods and how they work, see the following resources:
+
+- [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+- [Video: Choose the right authentication methods to keep your organization safe](https://youtu.be/LB2yj4HSptc)
+
+You can use this [PowerShell script](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/) to analyze users' MFA configurations and suggest the appropriate MFA authentication method.
+
+For the best flexibility and usability, use the Microsoft Authenticator app. This authentication method provides the best user experience and multiple modes, such as passwordless, MFA push notifications, and OATH codes. The Microsoft Authenticator app also meets the National Institute of Standards and Technology (NIST) [Authenticator Assurance Level 2 requirements](../standards/nist-authenticator-assurance-level-2.md).
+
+You can control the authentication methods available in your tenant. For example, you may want to block some of the least secure methods, such as SMS.
+
+| Authentication method | Manage from | Scoping |
+|--|--|--|
+| Microsoft Authenticator (Push notification and passwordless phone sign-in) | MFA settings or Authentication methods policy | Authenticator passwordless phone sign-in can be scoped to users and groups |
+| FIDO2 security key | Authentication methods policy | Can be scoped to users and groups |
+| Software or Hardware OATH tokens | MFA settings | |
+| SMS verification | MFA settings | Manage SMS sign-in for primary authentication in authentication policy. SMS sign-in can be scoped to users and groups. |
+| Voice calls | Authentication methods policy | |
++
+## Plan Conditional Access policies
+
+Azure AD MFA is enforced with Conditional Access policies. These policies allow you to prompt users for multifactor authentication when needed for security and stay out of users' way when not needed.
+
+![Conceptual Conditional Access process flow](media/concept-mfa-plan/conditional-access-overview-how-it-works.png)
+
+In the Azure portal, you configure Conditional Access policies under **Azure Active Directory** > **Security** > **Conditional Access**.
+
+To learn more about creating Conditional Access policies, see [Conditional Access policy to prompt for Azure AD MFA when a user signs in to the Azure portal](tutorial-enable-azure-mfa.md). This helps you to:
+
+- Become familiar with the user interface
+- Get a first impression of how Conditional Access works
+
+For end-to-end guidance on Azure AD Conditional Access deployment, see the [Conditional Access deployment plan](../conditional-access/plan-conditional-access.md).
+
+### Common policies for Azure AD MFA
+
+Common use cases to require Azure AD MFA include:
+
+- For [administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
+- To [specific applications](tutorial-enable-azure-mfa.md)
+- For [all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
+- For [Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md)
+- From [network locations you don't trust](../conditional-access/untrusted-networks.md)
+
+### Named locations
+
+To manage your Conditional Access policies, the location condition of a Conditional Access policy enables you to tie access control settings to the network locations of your users. We recommend using [Named Locations](../conditional-access/location-condition.md) so that you can create logical groupings of IP address ranges or countries and regions. You can then create a policy for all apps that blocks sign-in from those named locations. Be sure to exempt your administrators from this policy.
+
+### Risk-based policies
+
+If your organization uses [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) to detect risk signals, consider using [risk-based policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) instead of named locations. Policies can be created to force password changes when there is a threat of compromised identity or require multifactor authentication when a sign-in is deemed [risky by events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) such as leaked credentials, sign-ins from anonymous IP addresses, and more.
+
+Risk policies include:
+
+- [Require all users to register for Azure AD MFA](../identity-protection/howto-identity-protection-configure-mfa-policy.md)
+- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
+- [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
+
+## Plan user session lifetime
+
+When planning your MFA deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
+Azure AD has multiple settings that determine how often you need to reauthenticate. Understand the needs of your business and users and configure settings that provide the best balance for your environment.
+
+We recommend using devices with Primary Refresh Tokens (PRTs) for an improved end-user experience, and reducing the session lifetime with a sign-in frequency policy only for specific business use cases.
+
+For more information, see [Optimize reauthentication prompts and understand session lifetime for Azure AD MFA](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+
+## Plan user registration
+
+A major step in every MFA deployment is getting users registered to use MFA. Authentication methods such as Voice and SMS allow pre-registration, while others like the Authenticator App require user interaction. Administrators must determine how users will register their methods.
+
+### Combined registration for SSPR and Azure AD MFA
+We recommend using the [combined registration experience](howto-registration-mfa-sspr-combined.md) for Azure AD MFA and [Azure AD self-service password reset (SSPR)](concept-sspr-howitworks.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD MFA. Combined registration is a single step for end users.
+
+### Registration with Identity Protection
+Azure AD Identity Protection contributes both an MFA registration policy and automated risk detection and remediation policies to the Azure AD MFA story. Policies can be created to force password changes when there is a threat of compromised identity or to require MFA when a sign-in is deemed risky.
+If you use Azure AD Identity Protection, [configure the Azure AD MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) to prompt your users to register the next time they sign in interactively.
+
+### Registration without Identity Protection
+If you don't have licenses that enable Azure AD Identity Protection, users are prompted to register the next time that MFA is required at sign-in.
+To require users to use MFA, you can use Conditional Access policies and target frequently used applications like HR systems.
+If a user's password is compromised, it could be used to register for MFA, taking control of their account. We therefore recommend [securing the security registration process with Conditional Access policies](../conditional-access/howto-conditional-access-policy-registration.md) that require trusted devices and locations.
+You can further secure the process by also requiring a [Temporary Access Pass](howto-authentication-temporary-access-pass.md): a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including passwordless ones.
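
If you adopt a Temporary Access Pass for onboarding, the sketch below shows how an admin could issue one with the Microsoft Graph PowerShell SDK. The cmdlet and parameter names come from that SDK rather than from this article, so treat them as assumptions and verify them against the Temporary Access Pass documentation linked above.

```powershell
# Assumes the Microsoft.Graph PowerShell SDK is installed and the signed-in admin
# has the UserAuthenticationMethod.ReadWrite.All permission. Values are illustrative.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

# Issue a one-time-use pass that is valid for 60 minutes.
New-MgUserAuthenticationTemporaryAccessPassMethod -UserId "user@contoso.com" `
    -IsUsableOnce:$true `
    -LifetimeInMinutes 60
```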
+
+### Increase the security of registered users
+If you have users registered for MFA using SMS or voice calls, you may want to move them to more secure methods such as the Microsoft Authenticator app. Microsoft now offers a public preview of functionality that allows you to prompt users to set up the Microsoft Authenticator app during sign-in. You can set these prompts by group, controlling who is prompted, enabling targeted campaigns to move users to the more secure method.
+
+### Plan recovery scenarios
+As mentioned before, ensure users are registered for more than one MFA method, so that if one is unavailable, they have a backup.
+If the user does not have a backup method available, you can:
+
+- Provide them a Temporary Access Pass so that they can manage their own authentication methods. You can also provide a Temporary Access Pass to enable temporary access to resources.
+- Update their methods as an administrator. To do so, select the user in the Azure portal, then select Authentication methods and update their methods.
+### User communications
+
+It's critical to inform users about upcoming changes, Azure AD MFA registration requirements, and any necessary user actions.
+We provide [communication templates](https://aka.ms/mfatemplates) and [end-user documentation](../user-help/security-info-setup-signin.md) to help draft your communications. Send users to [https://myprofile.microsoft.com](https://myprofile.microsoft.com/) to register by selecting the **Security Info** link on that page.
+
+## Plan integration with on-premises systems
+
+Applications that authenticate directly with Azure AD and have modern authentication (WS-Fed, SAML, OAuth, OpenID Connect) can make use of Conditional Access policies.
+Some legacy and on-premises applications do not authenticate directly against Azure AD and require additional steps to use Azure AD MFA. You can integrate them by using Azure AD Application Proxy or [Network policy services](/windows-server/networking/core-network-guide/core-network-guide#BKMK_optionalfeatures).
+
+### Integrate with AD FS resources
+
+We recommend migrating applications secured with Active Directory Federation Services (AD FS) to Azure AD. However, if you are not ready to migrate these to Azure AD, you can use the Azure MFA adapter with AD FS 2016 or newer.
+If your organization is federated with Azure AD, you can [configure Azure AD MFA as an authentication provider with AD FS resources](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) both on-premises and in the cloud.
+
+### RADIUS clients and Azure AD MFA
+
+For applications that are using RADIUS authentication, we recommend moving client applications to modern protocols such as SAML, OpenID Connect, or OAuth on Azure AD. If the application cannot be updated, then you can deploy [Network Policy Server (NPS) with the Azure MFA extension](howto-mfa-nps-extension.md). The NPS extension acts as an adapter between RADIUS-based applications and Azure AD MFA to provide a second factor of authentication.
+
+#### Common integrations
+
+Many vendors now support SAML authentication for their applications. When possible, we recommend federating these applications with Azure AD and enforcing MFA through Conditional Access. If your vendor doesn't support modern authentication, you can use the NPS extension.
+Common RADIUS client integrations include applications such as [Remote Desktop Gateways](howto-mfa-nps-extension-rdg.md) and [VPN servers](howto-mfa-nps-extension-vpn.md).
+
+Others might include:
+
+- Citrix Gateway
+
+ [Citrix Gateway](https://docs.citrix.com/advanced-concepts/implementation-guides/citrix-gateway-microsoft-azure.html#microsoft-azure-mfa-deployment-methods) supports both RADIUS and NPS extension integration, and a SAML integration.
+
+- Cisco VPN
+ - The Cisco VPN supports both RADIUS and [SAML authentication for SSO](../saas-apps/cisco-anyconnect.md).
+ - By moving from RADIUS authentication to SAML, you can integrate the Cisco VPN without deploying the NPS extension.
+
+- All VPNs
+
+## Deploy Azure AD MFA
+
+Your MFA rollout plan should include a pilot deployment followed by deployment waves that are within your support capacity. Begin your rollout by applying your Conditional Access policies to a small group of pilot users. After evaluating the effect on the pilot users, process used, and registration behaviors, you can either add more groups to the policy or add more users to the existing groups.
+
+Follow the steps below:
+
+1. Meet the necessary prerequisites
+1. Configure chosen authentication methods
+1. Configure your Conditional Access policies
+1. Configure session lifetime settings
+1. Configure Azure AD MFA registration policies
+
+## Manage Azure AD MFA
+This section provides reporting and troubleshooting information for Azure AD MFA.
+
+### Reporting and Monitoring
+
+Azure AD has reports that provide technical and business insights. Use them to follow the progress of your deployment and to check whether your users are successful at signing in with MFA. Have your business and technical application owners assume ownership of and consume these reports based on your organization's requirements.
+
+You can monitor authentication method registration and usage across your organization using the [Authentication Methods Activity dashboard](howto-authentication-methods-activity.md). This helps you understand what methods are being registered and how they're being used.
+
+#### Sign-in report to review MFA events
+
+The Azure AD sign-in reports include authentication details for events when a user is prompted for multi-factor authentication, and whether any Conditional Access policies were in use. You can also use PowerShell for reporting on users registered for MFA.
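
As one example of the PowerShell option mentioned above, the following sketch uses the legacy MSOnline module to list users who have no MFA methods registered; the cmdlet and property names are assumptions to verify against the reporting article linked below.

```powershell
# Assumes the MSOnline module is installed; sign in first.
Connect-MsolService

# List users who have not registered any strong authentication (MFA) methods.
Get-MsolUser -All |
    Where-Object { $_.StrongAuthenticationMethods.Count -eq 0 } |
    Select-Object UserPrincipalName, DisplayName
```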
+
+NPS extension and AD FS logs can be viewed from **Security** > **MFA** > **Activity report**.
+
+For more information, and additional MFA reports, see [Review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md#view-the-azure-ad-sign-ins-report).
+
+### Troubleshoot Azure AD MFA
+See [Troubleshooting Azure AD MFA](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues.
+
+## Next steps
+
+[Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
+
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Currently, a device can only be registered in a single tenant. This limit means
To learn about Azure AD authentication and passwordless methods, see the following articles: - [Learn how passwordless authentication works](concept-authentication-passwordless.md)-- [Learn about device registration](../devices/overview.md#getting-devices-in-azure-ad)
+- [Learn about device registration](../devices/overview.md)
- [Learn about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Here's what you need to know about email as an alternate login ID:
* The feature is available in Azure AD Free edition and higher. * The feature enables sign-in with verified domain *ProxyAddresses* for cloud-authenticated Azure AD users.
-* When a user signs in with a non-UPN email, the `unique_name` and `preferred_username` claims (if present) in the [ID token](../develop/id-tokens.md) will have the value of the non-UPN email.
+* When a user signs in with a non-UPN email, the `unique_name` and `preferred_username` claims (if present) in the [ID token](../develop/id-tokens.md) will return the non-UPN email.
+* The feature supports managed authentication with Password Hash Sync (PHS) or Pass-Through Authentication (PTA).
* There are two options for configuring the feature: * [Home Realm Discovery (HRD) policy](#enable-user-sign-in-with-an-email-address) - Use this option to enable the feature for the entire tenant. Global administrator privileges required. * [Staged rollout policy](#enable-staged-rollout-to-test-user-sign-in-with-an-email-address) - Use this option to test the feature with specific Azure AD groups. Global administrator privileges required.
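
For the HRD policy option, a minimal sketch using the AzureADPreview PowerShell module follows. The policy definition is an assumption based on the HomeRealmDiscoveryPolicy schema, so verify it against the linked section before relying on it.

```powershell
# Assumes the AzureADPreview module and a Global Administrator session.
Connect-AzureAD

# Create a tenant-wide HRD policy that enables email as an alternate login ID.
New-AzureADPolicy -Definition @('{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled": true}}}') `
    -DisplayName "BasicAutoAccelerationPolicy" `
    -IsOrganizationDefault $true `
    -Type "HomeRealmDiscoveryPolicy"
```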
Here's what you need to know about email as an alternate login ID:
In the current preview state, the following limitations apply to email as an alternate login ID:
-* Users may see their UPN, even when they signed-in with their non-UPN email. The following example behavior may be seen:
+* **User experience -** Users may see their UPN, even when they signed-in with their non-UPN email. The following example behavior may be seen:
* User is prompted to sign in with UPN when directed to Azure AD sign-in with `login_hint=<non-UPN email>`. * When a user signs-in with a non-UPN email and enters an incorrect password, the *"Enter your password"* page changes to display the UPN.
- * On some Microsoft sites and apps, such as Microsoft Office, the **Account Manager** control typically displayed in the upper right may display the user's UPN instead of the non-UPN email used to sign in.
+ * On some Microsoft sites and apps, such as Microsoft Office, the *Account Manager* control typically displayed in the upper right may display the user's UPN instead of the non-UPN email used to sign in.
-* Some flows are currently not compatible with non-UPN emails, such as the following:
+* **Unsupported flows -** Some flows are currently not compatible with non-UPN emails, such as the following:
* Identity Protection doesn't match non-UPN emails with *Leaked Credentials* risk detection. This risk detection uses the UPN to match credentials that have been leaked. For more information, see [Azure AD Identity Protection risk detection and remediation][identity-protection]. * B2B invites sent to a non-UPN email are not fully supported. After accepting an invite sent to a non-UPN email, sign-in with the non-UPN email may not work for the guest user on the resource tenant endpoint. * When a user is signed-in with a non-UPN email, they cannot change their password. Azure AD self-service password reset (SSPR) should work as expected. During SSPR, the user may see their UPN if they verify their identity via alternate email.
-* The following scenarios are not supported. Sign-in with non-UPN email to:
+* **Unsupported scenarios -** The following scenarios are not supported. Sign-in with non-UPN email for:
* Hybrid Azure AD joined devices * Azure AD joined devices
+ * Azure AD registered devices
+ * Seamless SSO
+ * Applications using Resource Owner Password Credentials (ROPC)
+ * Applications using legacy authentication such as POP3 and SMTP
* Skype for Business * Microsoft Office on macOS
- * OneDrive (when the sign-in flow does not involve Multi-Factor Authentication)
* Microsoft Teams on web
- * Resource Owner Password Credentials (ROPC) flows
+ * OneDrive, when the sign-in flow does not involve Multi-Factor Authentication
-* Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs.
-* Staged rollout policy does not work as expected for users that are included in multiple staged rollout policies.
-* Within a tenant, a cloud-only user's UPN can be the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. More on this issue in the [Troubleshoot](#troubleshoot) section.
+* **Unsupported apps -** Some third-party applications may not work as expected if they assume that the `unique_name` or `preferred_username` claims are immutable or will always match a specific user attribute (e.g. UPN).
-## Overview of alternate login ID options
+* **Logging -** Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs. In addition, the *Sign-in identifier type* field in the sign-in logs may not always be accurate and should not be used to determine whether the feature has been used for sign-in.
+
+* **Staged rollout policy -** The following limitations apply only when the feature is enabled using staged rollout policy:
+ * The feature does not work as expected for users that are included in other staged rollout policies.
+ * Staged rollout policy supports a maximum of 10 groups per feature.
+ * Staged rollout policy does not support nested groups.
+ * Staged rollout policy does not support dynamic groups.
+ * Contact objects inside the group will block the group from being added to a staged rollout policy.
+* **Duplicate values -** Within a tenant, a cloud-only user's UPN can be the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. More on this issue in the [Troubleshoot](#troubleshoot) section.
+
+## Overview of alternate login ID options
To sign in to Azure AD, users enter a value that uniquely identifies their account. Historically, you could only use the Azure AD UPN as the sign-in identifier. For organizations where the on-premises UPN is the user's preferred sign-in email, this approach was great. Those organizations would set the Azure AD UPN to the exact same value as the on-premises UPN, and users would have a consistent sign-in experience.
A different approach is to synchronize the Azure AD and on-premises UPNs to the
Traditional Active Directory Domain Services (AD DS) or Active Directory Federation Services (AD FS) authentication happens directly on your network and is handled by your AD DS infrastructure. With hybrid authentication, users can instead sign in directly to Azure AD.
-To support this hybrid authentication approach, you synchronize your on-premises AD DS environment to Azure AD using [Azure AD Connect][azure-ad-connect] and configure it to use Password Hash Sync (PHS) or Pass-Through Authentication (PTA). For more information, see [Choose the right authentication method for your Azure AD hybrid identity solution][hybrid-auth-methods].
+To support this hybrid authentication approach, you synchronize your on-premises AD DS environment to Azure AD using [Azure AD Connect][azure-ad-connect] and configure it to use PHS or PTA. For more information, see [Choose the right authentication method for your Azure AD hybrid identity solution][hybrid-auth-methods].
In both configuration options, the user submits their username and password to Azure AD, which validates the credentials and issues a ticket. When users sign in to Azure AD, it removes the need for your organization to host and manage an AD FS infrastructure.
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/terms-of-use.md
Previously updated : 01/27/2020 Last updated : 07/06/2021
You can edit some details of terms of use policies, but you can't modify an exis
![Edit terms of use pane showing name and expand options](./media/terms-of-use/edit-terms-use.png) 5. In the pane on the right, upload the pdf for the new version
-6. There is also a toggle option here **Require reaccept** if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, next time they try to access the resource defined in your conditional access policy they will be prompted to accept this new version. If you don't require your users to reaccept, their previous consent will stay current and only new users who have not consented before or whose consent expires will see the new version.
+6. There is also a toggle option here, **Require reaccept**, if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, the next time they try to access the resource defined in your conditional access policy they will be prompted to accept this new version. If you don't require your users to reaccept, their previous consent will stay current and only new users who have not consented before or whose consent expires will see the new version. Note that until the user's session expires, **Require reaccept** does not force users to accept the new TOU. If you need to ensure immediate reacceptance, delete and recreate the TOU, or create a new TOU for this case.
![Edit terms of use re-accept option highlighted](./media/terms-of-use/re-accept.png)
A: The user counts in the terms of use report and who accepted/declined are stor
**Q: Why do I see a different number of consents in the terms of use report vs. the Azure AD audit logs?**<br /> A: The terms of use report is stored for the lifetime of that terms of use policy, while the Azure AD audit logs are stored for 30 days. Also, the terms of use report only displays the users current consent state. For example, if a user declines and then accepts, the terms of use report will only show that user's accept. If you need to see the history, you can use the Azure AD audit logs.
-**Q: If I edit the details for a terms of use policy, does it require users to accept again?**<br />
-A: No, if an administrator edits the details for a terms of use policy (name, display name, require users to expand, or add a language), it does not require users to reaccept the new terms.
-
-**Q: Can I update an existing terms of use policy document?**<br />
-A: Currently, you can't update an existing terms of use policy document. To change a terms of use policy document, you will have to create a new terms of use policy instance.
- **Q: If hyperlinks are in the terms of use policy PDF document, will end users be able to click them?**<br /> A: Yes, end users are able to select hyperlinks to additional pages but links to sections within the document are not supported. Also, hyperlinks in terms of use policy PDFs do not work when accessed from the Azure AD MyApps/MyAccount portal.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/access-tokens.md
Previously updated : 04/02/2021 Last updated : 06/25/2021 -+
Some claims are used to help Azure AD secure tokens in case of reuse. These are
|Claim | Format | Description | |--|--|-| | `typ` | String - always "JWT" | Indicates that the token is a JWT.|
-| `nonce` | String | A unique identifier used to protect against token replay attacks. Your resource can record this value to protect against replays. |
| `alg` | String | Indicates the algorithm that was used to sign the token, for example, "RS256" | | `kid` | String | Specifies the thumbprint for the public key that's used to sign this token. Emitted in both v1.0 and v2.0 access tokens. | | `x5t` | String | Functions the same (in use and value) as `kid`. `x5t` is a legacy claim emitted only in v1.0 access tokens for compatibility purposes. |
Some claims are used to help Azure AD secure tokens in case of reuse. These are
| Claim | Format | Description | |--|--|-|
-| `aud` | String, an App ID URI or GUID | Identifies the intended recipient of the token - its audience. Your API should validate this value and reject the token if the value doesn't match. In v2.0 tokens, this is always the client ID of the API, while in v1.0 tokens it can be the client ID or the resource URI used in the request, depending on how the client requested the token.|
+| `aud` | String, an App ID URI or GUID | Identifies the intended recipient of the token - its audience. Your API must validate this value and reject the token if the value doesn't match. In v2.0 tokens, this is always the client ID of the API, while in v1.0 tokens it can be the client ID or the resource URI used in the request, depending on how the client requested the token.|
| `iss` | String, an STS URI | Identifies the security token service (STS) that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token issued is a v2.0 token (see the `ver` claim), the URI will end in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. Your app can use the GUID portion of the claim to restrict the set of tenants that can sign in to the app, if applicable. | |`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account not in the same tenant as the issuer - guests, for instance. If the claim isn't present, it means that the value of `iss` can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | | `iat` | int, a UNIX timestamp | "Issued At" indicates when the authentication for this token occurred. |
Some claims are used to help Azure AD secure tokens in case of reuse. These are
| `groups:src1` | JSON object | For token requests that are not length limited (see `hasgroups` above) but still too large for the token, a link to the full groups list for the user will be included. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` | | `sub` | String | The principal about which the token asserts information, such as the user of an app. This value is immutable and cannot be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource, and can be used as a key in database tables. Because the subject is always present in the tokens that Azure AD issues, we recommend using this value in a general-purpose authorization system. The subject is, however, a pairwise identifier - it is unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be desired depending on your architecture and privacy requirements. See also the `oid` claim (which does remain the same across apps within a tenant). | | `oid` | String, a GUID | The immutable identifier for the "principal" of the request - the user or service principal whose identity has been verified. In ID tokens and app+user tokens, this is the object ID of the user. In app-only tokens, this is the object id of the calling service principal. It can also be used to perform authorization checks safely and as a key in database tables. This ID uniquely identifies the principal across applications - two different applications signing in the same user will receive the same value in the `oid` claim. Thus, `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph will return this ID as the `id` property for a given [user account](/graph/api/resources/user). Because the `oid` allows multiple apps to correlate principals, the `profile` scope is required in order to receive this claim for users. Note that if a single user exists in multiple tenants, the user will contain a different object ID in each tenant - they are considered different accounts, even though the user logs into each account with the same credentials. |
-| `tid` | String, a GUID | Represents the Azure AD tenant that the user is from. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user belongs to. For personal accounts, the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. The `profile` scope is required in order to receive this claim. |
+|`tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, your app must request the `profile` scope. |
| `unique_name` | String | Only present in v1.0 tokens. Provides a human readable value that identifies the subject of the token. This value is not guaranteed to be unique within a tenant and should be used only for display purposes. | | `uti` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | | `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources should not use this claim. |
If your app has custom signing keys as a result of using the [claims-mapping](ac
Your application's business logic will dictate this step; some common authorization methods are laid out below, with a short sketch of such checks after the list.
-* Check the `scp` or `roles` claim to verify that all present scopes match those exposed by your API, and allow the client to do the requested action.
+* Use the `aud` claim to ensure that the user intended to call your application. If your resource's identifier is not in the `aud` claim, reject it.
+* Use the `scp` claim to validate that the user has granted the calling app permission to call your API.
+* Use the `roles` and `wids` claims to validate that the user themselves has authorization to call your API. For example, an admin may have permission to write to your API, but not a normal user.
* Ensure the calling client is allowed to call your API using the `appid` claim.
-* Validate the authentication status of the calling client using `appidacr` - it shouldn't be 0 if public clients aren't allowed to call your API.
-* Check against a list of past `nonce` claims to verify the token isn't being replayed.
* Check that the `tid` matches a tenant that is allowed to call your API. * Use the `amr` claim to verify the user has performed MFA. This should be enforced using [Conditional Access](../conditional-access/overview.md). * If you've requested the `roles` or `groups` claims in the access token, verify that the user is in the group allowed to do this action.
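
The following sketch illustrates the kinds of checks listed above against a decoded access token payload. It assumes the token's signature and lifetime were already validated, and the expected audience, tenant list, and scope are placeholders for your API's own configuration.

```powershell
# Illustrative only: run these checks after signature and lifetime validation.
# $accessToken holds the raw JWT; decode its payload (the middle Base64Url segment).
$payload = ($accessToken -split '\.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json

# Placeholder expected values - replace with your API's own configuration.
$expectedAudience = 'api://11111111-2222-3333-4444-555555555555'
$allowedTenants   = @('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee')
$requiredScope    = 'Files.Read'

if ($claims.aud -ne $expectedAudience) { throw 'aud claim does not match this API.' }
if ($claims.tid -notin $allowedTenants) { throw 'Tenant is not allowed to call this API.' }
if (($claims.scp -split ' ') -notcontains $requiredScope) { throw 'Required scope was not granted.' }
```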
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/id-tokens.md
Previously updated : 04/02/2021 Last updated : 06/25/2021
The table below shows the claims that are in most ID tokens by default (except w
|`roles`| Array of strings | The set of roles that were assigned to the user who is logging in. | |`rh` | Opaque String |An internal claim used by Azure to revalidate tokens. Should be ignored. | |`sub` | String | The principal about which the token asserts information, such as the user of an app. This value is immutable and cannot be reassigned or reused. The subject is a pairwise identifier - it is unique to a particular application ID. If a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be wanted depending on your architecture and privacy requirements. |
-|`tid` | String, a GUID | A GUID that represents the Azure AD tenant that the user is from. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user belongs to. For personal accounts, the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. The `profile` scope is required to receive this claim. |
+|`tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, your app must request the `profile` scope. |
|`unique_name` | String | Provides a human readable value that identifies the subject of the token. This value is unique at any given point in time, but as emails and other identifiers can be reused, this value can reappear on other accounts. As such, the value should be used only for display purposes. Only issued in v1.0 `id_tokens`. | |`uti` | Opaque String | An internal claim used by Azure to revalidate tokens. Should be ignored. | |`ver` | String, either 1.0 or 2.0 | Indicates the version of the id_token. |
active-directory Msal Js Known Issues Ie Edge Browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-known-issues-ie-edge-browsers.md
Title: Issues on Internet Explorer & Microsoft Edge (MSAL.js) | Azure
description: Learn about known issues when using the Microsoft Authentication Library for JavaScript (MSAL.js) with Internet Explorer and Microsoft Edge browsers. -+
Last updated 05/18/2020-+ #Customer intent: As an application developer, I want to learn about issues with MSAL.js library so I can decide if this platform meets my application development needs and requirements.
The cause for most of these issues is as follows. The session storage and local
`Error :login_required; Error description:AADSTS50058: A silent sign-in request was sent but no user is signed in. The cookies used to represent the user's session were not sent in the request to Azure AD. This can happen if the user is using Internet Explorer or Edge, and the web app sending the silent sign-in request is in different IE security zone than the Azure AD endpoint (login.microsoftonline.com)` -- **Popup window doesn't close or is stuck when using login through Popup to authenticate**. When authenticating through popup window in Microsoft Edge or IE(InPrivate), after entering credentials and signing in, if multiple domains across security zones are involved in the navigation, the popup window doesn't close because MSAL.js loses the handle to the popup window.
+- **Popup window doesn't close or is stuck when using login through Popup to authenticate**. When authenticating through popup window in Microsoft Edge or IE(InPrivate), after entering credentials and signing in, if multiple domains across security zones are involved in the navigation, the popup window doesn't close because MSAL.js loses the handle to the popup window.
### Update: Fix available in MSAL.js 0.2.3 Fixes for the authentication redirect loop issues have been released in [MSAL.js 0.2.3](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases). Enable the flag `storeAuthStateInCookie` in the MSAL.js config to take advantage of this fix. By default this flag is set to false.
When the `storeAuthStateInCookie` flag is enabled, MSAL.js will use the browser
Use workarounds below. #### Other workarounds
-Make sure to test that your issue is occurring only on the specific version of Microsoft Edge browser and works on the other browsers before adopting these workarounds.
+Make sure to test that your issue is occurring only on the specific version of Microsoft Edge browser and works on the other browsers before adopting these workarounds.
1. As a first step to get around these issues, ensure that the application domain and any other sites involved in the redirects of the authentication flow are added as trusted sites in the security settings of the browser, so that they belong to the same security zone. To do so, follow these steps: - Open **Internet Explorer** and click on the **settings** (gear icon) in the top-right corner
active-directory Msal Js Prompt Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-prompt-behavior.md
Title: Interactive request prompt behavior (MSAL.js) | Azure
description: Learn to customize prompt behavior in interactive calls using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 04/24/2019-+ #Customer intent: As an application developer, I want to learn about customizing the UI prompt behaviors in MSAL.js library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-sso.md
Title: Single sign-on (MSAL.js) | Azure
description: Learn about building single sign-on experiences using the Microsoft Authentication Library for JavaScript (MSAL.js). -+
Last updated 04/24/2019-+ #Customer intent: As an application developer, I want to learn about enabling single sign on experiences with MSAL.js library so I can decide if this platform meets my application development needs and requirements.
const myMSALObj = new UserAgentApplication(config);
## SSO between apps
-When a user authenticates, a session cookie is set on the Azure AD domain in the browser. MSAL.js relies on this session cookie to provide SSO for the user between different applications. MSAL.js also caches the ID tokens and access tokens of the user in the browser storage per application domain. As a result, the SSO behavior varies for different cases:
+When a user authenticates, a session cookie is set on the Azure AD domain in the browser. MSAL.js relies on this session cookie to provide SSO for the user between different applications. MSAL.js also caches the ID tokens and access tokens of the user in the browser storage per application domain. As a result, the SSO behavior varies for different cases:
### Applications on the same domain
var request = {
userAgentApplication.acquireTokenSilent(request).then(function(response) { const token = response.accessToken }
-).catch(function (error) {
+).catch(function (error) {
//handle error }); ```
var request = {
userAgentApplication.acquireTokenSilent(request).then(function(response) { const token = response.accessToken }
-).catch(function (error) {
+).catch(function (error) {
//handle error }); ``` ## SSO in ADAL.js to MSAL.js update
-MSAL.js brings feature parity with ADAL.js for Azure AD authentication scenarios. To make the migration from ADAL.js to MSAL.js easy and to avoid prompting your users to sign in again, the library reads the ID token representing user's session in ADAL.js cache, and seamlessly signs in the user in MSAL.js.
+MSAL.js brings feature parity with ADAL.js for Azure AD authentication scenarios. To make the migration from ADAL.js to MSAL.js easy and to avoid prompting your users to sign in again, the library reads the ID token representing user's session in ADAL.js cache, and seamlessly signs in the user in MSAL.js.
To take advantage of the single sign-on (SSO) behavior when updating from ADAL.js, you will need to ensure the libraries are using `localStorage` for caching tokens. Set the `cacheLocation` to `localStorage` in both the MSAL.js and ADAL.js configuration at initialization as follows:
active-directory Msal Js Use Ie Browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-use-ie-browser.md
Title: Issues on Internet Explorer (MSAL.js) | Azure
description: Use the Microsoft Authentication Library for JavaScript (MSAL.js) with Internet Explorer browser. -+
Last updated 05/16/2019-+ #Customer intent: As an application developer, I want to learn about issues with MSAL.js library so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration-confidential-client.md
Updating code depends on the confidential client scenario. Some steps are common
The confidential client scenarios are as listed below: -- [Daemon scenarios](/active-directory/develop/msal-net-migration-confidential-client?tabs=daemon#migrate-daemon-scenarios) supported by web apps, web APIs, and daemon console applications.-- [Web api calling downstream web apis](/active-directory/develop/msal-net-migration-confidential-client?tabs=obo#migrate-on-behalf-of-calls-obo-in-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.-- [Web app calling web apis](/active-directory/develop/msal-net-migration-confidential-client?tabs=authcode#migrate-acquiretokenbyauthorizationcodeasync-in-web-apps) supported by Web apps that sign in users and call a downstream web API.
+- [Daemon scenarios](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=daemon#migrate-daemon-scenarios) supported by web apps, web APIs, and daemon console applications.
+- [Web api calling downstream web apis](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=obo#migrate-on-behalf-of-calls-obo-in-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.
+- [Web app calling web apis](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=authcode#migrate-acquiretokenbyauthorizationcodeasync-in-web-apps) supported by Web apps that sign in users and call a downstream web API.
You may have provided a wrapper around ADAL.NET to handle certificates and caching. This article uses the same approach to illustrate the ADAL.NET to MSAL.NET migration process. However, this code is only for demonstration purposes. Don't copy/paste these wrappers or integrate them in your code as they are.
The ADAL code for your app uses daemon scenarios if it contains a call to `Authe
- A resource (App ID URI) as a first parameter. - A `IClientAssertionCertificate` or `ClientAssertion` as the second parameter.
-It doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using [on behalf of flow](/active-directory/develop/msal-net-migration-confidential-client?#migrate-on-behalf-of-calls-obo-in-web-apis) scenario.
+It doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using [on behalf of flow](/azure/active-directory/develop/msal-net-migration-confidential-client?#migrate-on-behalf-of-calls-obo-in-web-apis) scenario.
#### Update the code of daemon scenarios
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
The [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web)
| - | | | `AddInMemoryTokenCaches` | In memory token cache serialization. This implementation is great in samples. It's also good in production applications provided you don't mind if the token cache is lost when the web app is restarted. `AddInMemoryTokenCaches` takes an optional parameter of type `MsalMemoryTokenCacheOptions` that enables you to specify the duration after which the cache entry will expire unless it's used. | `AddSessionTokenCaches` | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims as the cookie would become too large.
-| `AddDistributedTokenCaches` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation, therefore enabling you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed.md).
+| `AddDistributedTokenCaches` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation, therefore enabling you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed).
Here's an example of code using the in-memory cache in the [ConfigureServices](/dotnet/api/microsoft.aspnetcore.hosting.startupbase.configureservices) method of the [Startup](/aspnet/core/fundamentals/startup) class in an ASP.NET Core application:
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-daemon.md
Title: "Quickstart: Call Microsoft Graph from a Java daemon | Azure"
description: In this quickstart, you learn how a Java app can get an access token and call an API protected by Microsoft identity platform endpoint, using the app's own identity -+
Last updated 01/22/2021-+ #Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow. # Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity
-In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
> [!div renderon="docs"] > ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-java-daemon/java-console-daemon.svg)
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript.md
Title: "Quickstart: Sign in users in JavaScript single-page apps | Azure"
description: In this quickstart, you learn how a JavaScript app can call an API that requires access tokens issued by the Microsoft identity platform. -+
Last updated 04/11/2019-+ #Customer intent: As an app developer, I want to learn how to get access tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use. > 1. This quickstart requires the [Implicit grant flow](v2-oauth2-implicit-grant-flow.md) to be enabled. Under **Manage**, select **Authentication**. > 1. Under **Platform Configurations** > **Add a platform**. Select **Web**.
-> 1. Set the **Redirect URI** value to `http://localhost:3000/`.
+> 1. Set the **Redirect URI** value to `http://localhost:3000/`.
> 1. Select **Access Tokens** and **ID Tokens** under the **Implicit grant and hybrid flows**. > 1. Select **Configure**.
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
Title: Code samples for Microsoft identity platform
-description: Provides an index of available Microsoft identity platform code samples, organized by scenario.
+ Title: Code samples for Microsoft identity platform authentication and authorization
+description: An index of Microsoft-maintained code samples demonstrating authentication and authorization in several application types, development languages, and frameworks.
Previously updated : 11/04/2020 Last updated : 07/06/2021
-# Microsoft identity platform code samples (v2.0 endpoint)
+# Microsoft identity platform code samples
-You can use the Microsoft identity platform to:
+These code samples, built and maintained by Microsoft, demonstrate authentication and authorization by using Azure AD and the Microsoft identity platform in several [application types](v2-app-types.md), development languages, and frameworks.
-- Add authentication and authorization to your web applications and web APIs.-- Require an access token to access a protected web API.
+- Sign in users to web applications and provide authorized access to protected web APIs.
+- Protect a web API by requiring an access token to perform API operations.
-This article briefly describes and provides you with links to samples for the Microsoft identity platform. These samples show you how it's done, and also provide code snippets that you can use in your applications. On the code sample page, you'll find detailed readme topics that help with requirements, installation, and setup. Comments within the code help you understand the critical sections.
-
-To understand the basic scenario for each sample type, see [App types for the Microsoft identity platform](v2-app-types.md).
-
-You can also contribute to the samples on GitHub. To learn how, see [Microsoft Azure Active Directory samples and documentation](https://github.com/Azure-Samples?page=3&query=active-directory).
+Each code sample includes a _README.md_ file that describes how to build the project (if applicable) and run the sample application. Comments in the code help you understand the critical sections that implement authentication and authorization by using authentication libraries and the identity platform.
## Single-page applications
The following samples illustrate web applications that sign in users. Some sampl
> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2) | ASP.NET Core Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/README.md) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/1-5-B2C/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md) <br/> &#8226; [Customize token cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-2-TokenCache/README.md) <br/> &#8226; [Call Graph (multi-tenant)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md) <br/> &#8226; [Call Azure REST APIs](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/3-WebApp-multi-APIs/README.md) <br/> &#8226; [Protect web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-1-MyOrg/README.md) <br/> &#8226; [Protect web API (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-2-B2C/README.md) <br/> &#8226; [Protect multi-tenant web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-3-AnyOrg/Readme.md) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md) <br/> &#8226; [Deploy to Azure Storage & App Service](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/6-Deploy-to-Azure/README.md) | &#8226; [MSAL.NET](https://aka.ms/msal-net) <br/> &#8226; [Microsoft.Identity.Web](https://aka.ms/microsoft-identity-web) | &#8226; [OIDC flow](./v2-protocols-oidc.md) <br/> &#8226; [Auth code flow](./v2-oauth2-auth-code-flow.md) <br/> &#8226; [On-Behalf-Of (OBO) flow](./v2-oauth2-on-behalf-of-flow.md) | > | Blazor | [GitHub repo](https://github.com/Azure-Samples/ms-identity-blazor-server/) | Blazor Server Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/MyOrg) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/B2C) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/MyOrg) <br/> &#8226; [Call web API (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/B2C) | MSAL.NET | | > | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | [Advanced Token Cache Scenarios](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | &#8226; 
[MSAL.NET](https://aka.ms/msal-net) <br/> &#8226; [Microsoft.Identity.Web](https://aka.ms/microsoft-identity-web) | [On-Behalf-Of (OBO) flow](./v2-oauth2-on-behalf-of-flow.md) |
+> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | [Use the Conditional Access auth context to perform step\-up authentication ](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | &#8226; [MSAL.NET](https://aka.ms/msal-net) <br/> &#8226; [Microsoft.Identity.Web](https://aka.ms/microsoft-identity-web) | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | [Active Directory FS to Azure AD migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | [MSAL.NET](https://aka.ms/msal-net) | | > | ASP.NET |[GitHub repo](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | [MSAL.NET](https://aka.ms/msal-net) | | > | ASP.NET |[GitHub repo](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [MSAL.NET](https://aka.ms/msal-net) | |
active-directory Scenario Spa App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-app-configuration.md
Title: Configure single-page app | Azure
description: Learn how to build a single-page application (app's code configuration) -+
Last updated 02/11/2020-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
Learn how to configure the code for your single-page application (SPA).
-## Microsoft libraries supporting single-page apps
+## Microsoft libraries supporting single-page apps
The following Microsoft libraries support single-page apps:
active-directory Scenario Spa Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-overview.md
Title: JavaScript single-page app scenario
+ Title: JavaScript single-page app scenario
description: Learn how to build a single-page application (scenario overview) by using the Microsoft identity platform. -+
Last updated 05/07/2019-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Scenario Spa Production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-production.md
Title: Move single-page app to production
+ Title: Move single-page app to production
description: Learn how to build a single-page application (move to production) -+
Last updated 05/07/2019-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
Now that you know how to acquire a token to call web APIs, here are some things
## Deploy your app
-Check out a [deployment sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter3) for learning how to deploy your SPA and Web API projects with Azure Storage and Azure App Services, respectively.
+Check out a [deployment sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter3) to learn how to deploy your SPA and Web API projects with Azure Storage and Azure App Service, respectively.
## Code samples
active-directory Scenario Spa Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-sign-in.md
Title: Single-page app sign-in & sign-out
+ Title: Single-page app sign-in & sign-out
description: Learn how to build a single-page application (sign-in) -+
Last updated 02/11/2020-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
import { MsalAuthenticationTemplate, useMsal } from "@azure/msal-react";
function WelcomeUser() { const { accounts } = useMsal(); const username = accounts[0].username;
-
+ return <p>Welcome, {username}</p> }
function SignInButton() {
function WelcomeUser() { const { accounts } = useMsal(); const username = accounts[0].username;
-
+ return <p>Welcome, {username}</p> }
function handleResponse(response) {
} else { // In case multiple accounts exist, you can select const currentAccounts = myMsal.getAllAccounts();
-
+ if (currentAccounts.length === 0) { // no accounts signed-in, attempt to sign a user in myMsal.loginRedirect(loginRequest);
import { MsalAuthenticationTemplate, useMsal } from "@azure/msal-react";
function WelcomeUser() { const { accounts } = useMsal(); const username = accounts[0].username;
-
+ return <p>Welcome, {username}</p> }
function SignInButton() {
function WelcomeUser() { const { accounts } = useMsal(); const username = accounts[0].username;
-
+ return <p>Welcome, {username}</p> }
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Title: "Tutorial: Create a JavaScript single-page app that uses the Microsoft id
description: In this tutorial, you build a JavaScript single-page app (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf. -+
Last updated 08/06/2020-+
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth-ropc.md
Previously updated : 05/18/2020 Last updated : 06/25/2021
The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password
> > * The Microsoft identity platform only supports ROPC for Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint. > * Personal accounts that are invited to an Azure AD tenant can't use ROPC.
-> * Accounts that don't have passwords can't sign in through ROPC. For this scenario, we recommend that you use a different flow for your app instead.
+> * Accounts that don't have passwords can't sign in with ROPC, which means features like SMS sign-in, FIDO, and the Authenticator app won't work with that flow. Use a flow other than ROPC if your app or users require these features.
> * If users need to use [multi-factor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to log in to the application, they will be blocked instead. > * ROPC is not supported in [hybrid identity federation](../hybrid/whatis-fed.md) scenarios (for example, Azure AD and ADFS used to authenticate on-premises accounts). If users are full-page redirected to an on-premises identity provider, Azure AD is not able to test the username and password against that identity provider. [Pass-through authentication](../hybrid/how-to-connect-pta.md) is supported with ROPC, however. > * An exception to a hybrid identity federation scenario would be the following: Home Realm Discovery policy with AllowCloudPasswordValidation set to TRUE will enable the ROPC flow to work for federated users when the on-premises password is synced to the cloud. For more information, see [Enable direct ROPC authentication of federated users for legacy applications](../manage-apps/configure-authentication-for-federated-users-portal.md#enable-direct-ropc-authentication-of-federated-users-for-legacy-applications). + ## Protocol diagram The following diagram shows the ROPC flow.
The following diagram shows the ROPC flow.
The ROPC flow is a single request: it sends the client identification and user's credentials to the IDP, and then receives tokens in return. The client must request the user's email address (UPN) and password before doing so. Immediately after a successful request, the client should securely release the user's credentials from memory. It must never save them.
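As an illustrative sketch only, that single request can be issued with Python's requests library; the tenant, client ID, and user credentials below are placeholders:

```python
import requests

# Placeholders: substitute your tenant, app (client) ID, and the user's credentials.
token_url = "https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token"

response = requests.post(token_url, data={
    "client_id": "00000000-0000-0000-0000-000000000000",
    "scope": "user.read openid profile offline_access",
    "username": "user@contoso.com",      # the user's UPN
    "password": "placeholder-password",
    "grant_type": "password",            # ROPC
})

tokens = response.json()
# Use the tokens, then release the user's credentials from memory; never persist them.
print(tokens.get("access_token", tokens))
```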
-> [!TIP]
-> Try executing this request in Postman!
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
-- ```HTTP // Line breaks and spaces are for legibility only. This is a public client, so no secret is required.
The following example shows a successful token response:
You can use the refresh token to acquire new access tokens and refresh tokens using the same flow described in the [OAuth Code flow documentation](v2-oauth2-auth-code-flow.md#refresh-the-access-token). + ### Error response If the user hasn't provided the correct username or password, or the client hasn't received the requested consent, authentication will fail.
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Previously updated : 03/29/2021 Last updated : 06/30/2021
This article describes how to program directly against the protocol in your appl
The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). It's used to perform authentication and authorization in the majority of app types, including [single page apps](v2-app-types.md#single-page-apps-javascript), [web apps](v2-app-types.md#web-apps), and [natively installed apps](v2-app-types.md#mobile-and-native-apps). The flow enables apps to securely acquire access_tokens that can be used to access resources secured by the Microsoft identity platform, as well as refresh tokens to get additional access_tokens, and ID tokens for the signed in user. + ## Protocol diagram At a high level, the entire authentication flow for an application looks a bit like this:
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&client_secret=JqQX2PNo9bpM0uEihUPzyrh // NOTE: Only required for web apps. This secret needs to be URL-Encoded. ```
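For illustration, redeeming the authorization code looks roughly like the following sketch with Python's requests library (the tenant, code, redirect URI, and secret are placeholders; requests URL-encodes the form values for you):

```python
import requests

# Placeholders: tenant, app (client) ID, the code returned to your redirect URI,
# the same redirect URI, and (web apps only) the client secret.
token_url = "https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token"

response = requests.post(token_url, data={
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "scope": "https://graph.microsoft.com/mail.read",
    "code": "<authorization code from the /authorize response>",
    "redirect_uri": "http://localhost/myapp/",
    "grant_type": "authorization_code",
    "client_secret": "placeholder-secret",  # omit for public clients
})

tokens = response.json()
print(tokens.get("access_token", tokens))
```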
-> [!TIP]
-> Try executing this request in Postman! (Don't forget to replace the `code`)
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://www.getpostman.com/collections/dba7e9c2e0870702dfc6)
- | Parameter | Required/optional | Description | ||-|-| | `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
Error responses will look like:
Now that you've successfully acquired an `access_token`, you can use the token in requests to web APIs by including it in the `Authorization` header:
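A minimal sketch of that call with Python's requests library, assuming `access_token` holds the token acquired above:

```python
import requests

# Assumption: access_token holds the token from the token response above.
access_token = "<access_token from the token response>"

# Call Microsoft Graph with the token in the Authorization header.
response = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(response.status_code, response.json())
```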
-> [!TIP]
-> Execute this request in Postman! (Replace the `Authorization` header first)
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
- ```HTTP GET /v1.0/me/messages Host: https://graph.microsoft.com
client_id=535fb089-9ff3-47b6-9bfb-4f1264799865
&client_secret=sampleCredentia1s // NOTE: Only required for web apps. This secret needs to be URL-Encoded ```
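A sketch of the refresh request with Python's requests library (illustration only; the refresh token and secret are placeholders):

```python
import requests

# Placeholders: tenant, app (client) ID, the refresh_token returned earlier,
# and (web apps only) the client secret.
token_url = "https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token"

response = requests.post(token_url, data={
    "client_id": "535fb089-9ff3-47b6-9bfb-4f1264799865",
    "scope": "https://graph.microsoft.com/mail.read",
    "refresh_token": "<refresh_token from a previous token response>",
    "grant_type": "refresh_token",
    "client_secret": "placeholder-secret",  # web apps only
})

tokens = response.json()
# Store the new refresh_token returned here in place of the old one.
print(tokens.get("access_token", tokens))
```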
-> [!TIP]
-> Try executing this request in Postman! (Don't forget to replace the `refresh_token`)
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
->
- | Parameter | Type | Description | ||-|--| | `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
A successful token response will look like:
| `refresh_token` | A new OAuth 2.0 refresh token. You should replace the old refresh token with this newly acquired refresh token to ensure your refresh tokens remain valid for as long as possible. <br> **Note:** Only provided if `offline_access` scope was requested.| | `id_token` | An unsigned JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it should not rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token reference`](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. | + #### Error response ```json
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Previously updated : 6/8/2021 Last updated : 06/30/2021 -+
The OAuth 2.0 client credentials grant flow permits a web service (confidential
In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there is no user involved in the authentication. This article covers both the steps needed to [authorize an application to call an API](#application-permissions), as well as [how to get the tokens needed to call that API](#get-a-token). + ## Protocol diagram The entire client credentials flow looks similar to the following diagram. We describe each of the steps later in this article.
If you sign the user into your app, you can identify the organization to which t
When you're ready to request permissions from the organization's admin, you can redirect the user to the Microsoft identity platform *admin consent endpoint*.
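As a rough sketch, the admin consent URL can be assembled like this in Python; the tenant, client ID, and redirect URI below are placeholder values:

```python
from urllib.parse import urlencode

# Placeholders: your tenant, app (client) ID, and a redirect URI registered on the app.
tenant = "contoso.onmicrosoft.com"
query = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",
    "redirect_uri": "https://localhost/myapp/permissions",
    "scope": "https://graph.microsoft.com/.default",
    "state": "12345",  # optional; returned to you unchanged
})
admin_consent_url = f"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?{query}"

# Redirect the admin's browser to admin_consent_url to grant the app's permissions.
print(admin_consent_url)
```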
-> [!TIP]
-> Try executing this request in Postman! (Use your own app ID for best results - the tutorial application won't request useful permissions.)
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
- ```HTTP // Line breaks are for legibility only.
After you've received a successful response from the app provisioning endpoint,
After you've acquired the necessary authorization for your application, proceed with acquiring access tokens for APIs. To get a token by using the client credentials grant, send a POST request to the `/token` Microsoft identity platform:
-> [!TIP]
-> Try executing this request in Postman! (Use your own app ID for best results - the tutorial application won't request useful permissions.)
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
- ### First case: Access token request with a shared secret ```HTTP
scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
| `client_assertion` | Required | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](active-directory-certificate-credentials.md) to learn how to register your certificate and the format of the assertion.| | `grant_type` | Required | Must be set to `client_credentials`. |
-Notice that the parameters are almost the same as in the case of the request by shared secret except that the client_secret parameter is replaced by two parameters: a client_assertion_type and client_assertion.
+The parameters for the certificate-based request differ in only one way from the shared secret-based request: the `client_secret` parameter is replaced by the `client_assertion_type` and `client_assertion` parameters.
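For illustration, MSAL for Python can build and sign the client assertion for you when it's given the certificate's private key and thumbprint; all values in this sketch are placeholders:

```python
import msal

# Placeholders: tenant, app (client) ID, certificate thumbprint, and private key path.
with open("certificate_private_key.pem") as key_file:
    private_key = key_file.read()

app = msal.ConfidentialClientApplication(
    client_id="535fb089-9ff3-47b6-9bfb-4f1264799865",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
    client_credential={
        "thumbprint": "CERTIFICATE_THUMBPRINT",
        "private_key": private_key,
    },
)

# MSAL signs a JWT with the certificate and sends it as the client_assertion,
# with client_assertion_type set to the jwt-bearer assertion type.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
print(result.get("access_token", result))
```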
### Successful response
-A successful response looks like this:
+A successful response from either method looks like this:
```json {
A successful response looks like this:
| `token_type` | Indicates the token type value. The only type that the Microsoft identity platform supports is `bearer`. | | `expires_in` | The amount of time that an access token is valid (in seconds). | + ### Error response An error response looks like this:
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-device-code.md
Previously updated : 11/19/2019 Last updated : 06/25/2021 -+
The Microsoft identity platform supports the [device authorization grant](https:
This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md). + ## Protocol diagram The entire device code flow looks similar to the next diagram. We describe each of the steps later in this article.
The entire device code flow looks similar to the next diagram. We describe each
The client must first check with the authentication server for a device and user code that's used to initiate authentication. The client collects this request from the `/devicecode` endpoint. In this request, the client should also include the permissions it needs to acquire from the user. From the moment this request is sent, the user has only 15 minutes to sign in (the usual value for `expires_in`), so only make this request when the user has indicated they're ready to sign in.
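A sketch of both steps using MSAL for Python's device flow helpers (illustration only; the client ID and tenant are placeholders):

```python
import msal

# Placeholders: your app (client) ID and tenant.
app = msal.PublicClientApplication(
    "00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# Calls the /devicecode endpoint; returns the user code, verification URI, and expiry.
# Only start this when the user has indicated they're ready to sign in.
flow = app.initiate_device_flow(scopes=["User.Read"])
print(flow["message"])  # Tells the user where to go and which code to enter.

# Polls the /token endpoint until the user finishes signing in or the code expires.
result = app.acquire_token_by_device_flow(flow)
print(result.get("access_token", result))
```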
-> [!TIP]
-> Try executing this request in Postman!
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
- ```HTTP // Line breaks are for legibility only.
A successful token response will look like:
| `refresh_token` | Opaque string | Issued if the original `scope` parameter included `offline_access`. | You can use the refresh token to acquire new access tokens and refresh tokens using the same flow documented in the [OAuth Code flow documentation](v2-oauth2-auth-code-flow.md#refresh-the-access-token).+
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Previously updated : 11/30/2020 Last updated : 06/25/2021
The Microsoft identity platform supports the OAuth 2.0 Implicit Grant flow as de
[!INCLUDE [suggest-msal-from-protocols](includes/suggest-msal-from-protocols.md)] + ## Prefer the auth code flow With the plans for [third party cookies to be removed from browsers](reference-third-party-cookies-spas.md), the **implicit grant flow is no longer a suitable authentication method**. The [silent SSO features](#getting-access-tokens-silently-in-the-background) of the implicit flow do not work without third party cookies, causing applications to break when they attempt to get a new token. We strongly recommend that all new applications use the [authorization code flow](v2-oauth2-auth-code-flow.md) that now supports single page apps in place of the implicit flow, and that [existing single page apps begin migrating to the authorization code flow](migrate-spa-implicit-to-auth-code.md) as well.
code=0.AgAAktYV-sfpYESnQynylW_UKZmH-C9y_G1A
| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token reference`](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_tokens`. | | `state` |If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. | + #### Error response Error responses may also be sent to the `redirect_uri` so the app can handle them appropriately:
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Previously updated : 08/7/2020 Last updated : 06/25/2021
# Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow - The OAuth 2.0 On-Behalf-Of flow (OBO) serves the use case where an application invokes a service/web API, which in turn needs to call another service/web API. The idea is to propagate the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform, on behalf of the user. This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md). - As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead. For more info about which clients can perform OBO calls, see [limitations](#client-limitations). + ## Protocol diagram Assume that the user has been authenticated on an application using the [OAuth 2.0 authorization code grant flow](v2-oauth2-auth-code-flow.md) or another login flow. At this point, the application has an access token *for API A* (token A) with the user's claims and consent to access the middle-tier web API (API A). Now, API A needs to make an authenticated request to the downstream web API (API B).
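A rough sketch of that middle-tier exchange using MSAL for Python's on-behalf-of helper (illustration only; the IDs, secret, and incoming token are placeholders):

```python
import msal

# Placeholders: the middle-tier API's own app (client) ID and secret.
middle_tier_app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
    client_credential="placeholder-client-secret",
)

# Token A: the access token the client presented to the middle-tier API (API A).
incoming_access_token = "<access token received from the calling client>"

# Exchange token A for a token to the downstream API (API B) on the user's behalf.
result = middle_tier_app.acquire_token_on_behalf_of(
    user_assertion=incoming_access_token,
    scopes=["https://graph.microsoft.com/.default"],
)
print(result.get("access_token", result))
```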
The following example shows a success response to a request for an access token
The above access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. The Microsoft Graph is set up to accept v1.0 tokens, so the Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either format of token - this way the resource can always get the right format of token regardless of how or where the token was requested by the client.
-Only applications should look at access tokens. Clients **must not** inspect them. Inspecting access tokens for other apps in your code will result in your app unexpectedly breaking when that app changes the format of their tokens or starts encrypting them.
### Error response example
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-protocols-oidc.md
Previously updated : 05/22/2020 Last updated : 06/23/2021
OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you
[OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) extends the OAuth 2.0 *authorization* protocol for use as an *authentication* protocol, so that you can do single sign-on using OAuth. OpenID Connect introduces the concept of an *ID token*, which is a security token that allows the client to verify the identity of the user. The ID token also gets basic profile information about the user. It also introduces the [UserInfo endpoint](userinfo.md), an API that returns information about the user. ## Protocol diagram: Sign-in
Response parameters mean the same thing regardless of the flow used to acquire t
| `id_token` | The ID token that the app requested. You can use the ID token to verify the user's identity and begin a session with the user. You'll find more details about ID tokens and their contents in the [`id_tokens` reference](id-tokens.md). | | `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. | + ### Error response Error responses might also be sent to the redirect URI so that the app can handle them appropriately. An error response looks like this:
active-directory Concept Azure Ad Join Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-azure-ad-join-hybrid.md
Previously updated : 06/27/2019 Last updated : 06/10/2021
# Hybrid Azure AD joined devices
-For more than a decade, many organizations have used the domain join to their on-premises Active Directory to enable:
+Organizations with existing Active Directory implementations can benefit from some of the functionality provided by Azure Active Directory (Azure AD) by implementing hybrid Azure AD joined devices. These devices are joined to your on-premises Active Directory and registered with Azure Active Directory.
-- IT departments to manage work-owned devices from a central location.-- Users to sign in to their devices with their Active Directory work or school accounts.-
-Typically, organizations with an on-premises footprint rely on imaging methods to provision devices, and they often use **Configuration Manager** or **group policy (GP)** to manage them.
-
-If your environment has an on-premises AD footprint and you also want benefit from the capabilities provided by Azure Active Directory, you can implement hybrid Azure AD joined devices. These devices, are devices that are joined to your on-premises Active Directory and registered with your Azure Active Directory.
+Hybrid Azure AD joined devices require network line of sight to your on-premises domain controllers periodically. Without this connection, devices become unusable. If this requirement is a concern, consider [Azure AD joining](concept-azure-ad-join.md) your devices.
| Hybrid Azure AD Join | Description | | | |
If your environment has an on-premises AD footprint and you also want benefit fr
Use Azure AD hybrid joined devices if: -- You have Win32 apps deployed to these devices that rely on Active Directory machine authentication.
+- You support down-level devices running Windows 7 and 8.1.
- You want to continue to use Group Policy to manage device configuration. - You want to continue to use existing imaging solutions to deploy and configure devices.-- You must support down-level Windows 7 and 8.1 devices in addition to Windows 10
+- You have Win32 apps deployed to these devices that rely on Active Directory machine authentication.
## Next steps
active-directory Concept Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-azure-ad-join.md
Title: What is an Azure AD joined device?
-description: Learn about Azure AD joined devices, and how device identity management can help you to manage devices that are accessing resources in your environment.
+description: Azure AD joined devices can help you to manage devices accessing resources in your environment.
Previously updated : 07/20/2020 Last updated : 06/10/2021
# Azure AD joined devices
-Azure AD join is intended for organizations that want to be cloud-first or cloud-only. Any organization can deploy Azure AD joined devices no matter the size or industry. Azure AD join works even in a hybrid environment, enabling access to both cloud and on-premises apps and resources.
+Any organization can deploy Azure AD joined devices no matter the size or industry. Azure AD join works even in hybrid environments, enabling access to both cloud and on-premises apps and resources.
| Azure AD Join | Description | | | |
Azure AD join is intended for organizations that want to be cloud-first or cloud
| | Applicable to all users in an organization | | **Device ownership** | Organization | | **Operating Systems** | All Windows 10 devices except Windows 10 Home |
-| | [Windows Server 2019 Virtual Machines running in Azure](howto-vm-sign-in-azure-ad-windows.md) (Server core is not supported) |
-| **Provisioning** | Self-service: Windows OOBE or Settings |
+| | [Windows Server 2019 Virtual Machines running in Azure](howto-vm-sign-in-azure-ad-windows.md) (Server core isn't supported) |
+| **Provisioning** | Self-service: Windows Out of Box Experience (OOBE) or Settings |
| | Bulk enrollment | | | Windows Autopilot | | **Device sign in options** | Organizational accounts using: |
While Azure AD join is primarily intended for organizations that do not have an
- You want to manage a group of users in Azure AD instead of in Active Directory. This scenario can apply, for example, to seasonal workers, contractors, or students. - You want to provide joining capabilities to workers in remote branch offices with limited on-premises infrastructure.
-You can configure Azure AD joined devices for all Windows 10 devices with the exception of Windows 10 Home.
+You can configure Azure AD joined devices for all Windows 10 devices except for Windows 10 Home.
The goal of Azure AD joined devices is to simplify:
active-directory Concept Azure Ad Register https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-azure-ad-register.md
Previously updated : 06/27/2019 Last updated : 06/09/2021
# Azure AD registered devices
-The goal of Azure AD registered devices is to provide your users with support for the Bring Your Own Device (BYOD) or mobile device scenarios. In these scenarios, a user can access your organization's Azure Active Directory controlled resources using a personal device.
+The goal of Azure AD registered devices is to provide your users with support for the bring your own device (BYOD) or mobile device scenarios. In these scenarios, a user can access your organization's resources using a personal device.
| Azure AD Registered | Description | | | | | **Definition** | Registered to Azure AD without requiring organizational account to sign in to the device | | **Primary audience** | Applicable to all users with the following criteria: |
-| | Bring your own device (BYOD) |
+| | Bring your own device |
| | Mobile devices | | **Device ownership** | User or Organization |
-| **Operating Systems** | Windows 10, iOS, Android, and MacOS |
+| **Operating Systems** | Windows 10, iOS, Android, and macOS |
| **Provisioning** | Windows 10 – Settings | | | iOS/Android – Company Portal or Microsoft Authenticator app |
-| | MacOS – Company Portal |
+| | macOS – Company Portal |
| **Device sign in options** | End-user local credentials | | | Password | | | Windows Hello | | | PIN |
-| | Biometrics or Pattern for other devices |
+| | Biometrics or pattern for other devices |
| **Device management** | Mobile Device Management (example: Microsoft Intune) | | | Mobile Application Management | | **Key capabilities** | SSO to cloud resources |
The goal of Azure AD registered devices is to provide your users with support fo
![Azure AD registered devices](./media/concept-azure-ad-register/azure-ad-registered-device.png)
-Azure AD registered devices are signed in to using a local account like a Microsoft account on a Windows 10 device, but additionally have an Azure AD account attached for access to organizational resources. Access to resources in the organization can be further limited based on that Azure AD account and Conditional Access policies applied to the device identity.
+Azure AD registered devices are signed in to using a local account like a Microsoft account on a Windows 10 device. These devices have an Azure AD account for access to organizational resources. Access to resources in the organization can be limited based on that Azure AD account and Conditional Access policies applied to the device identity.
Administrators can secure and further control these Azure AD registered devices using Mobile Device Management (MDM) tools like Microsoft Intune. MDM provides a means to enforce organization-required configurations like requiring encrypted storage, enforcing password complexity, and keeping security software updated.
Azure AD registration can be accomplished when accessing a work application for
## Scenarios
-A user in your organization wants to access tools for email, reporting time-off, and benefits enrollment from their home PC. Your organization has these tools behind a Conditional Access policy that requires access from an Intune compliant device. The user adds their organization account and registers their home PC with Azure AD and the required Intune policies are enforced giving the user access to their resources.
+A user in your organization wants to access your benefits enrollment tool from their home PC. Your organization requires that anyone accesses this tool from an Intune compliant device. The user registers their home PC with Azure AD and the required Intune policies are enforced giving the user access to their resources.
Another user wants to access their organizational email on their personal Android phone that has been rooted. Your company requires a compliant device and has created an Intune compliance policy to block any rooted devices. The employee is stopped from accessing organizational resources on this device.
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-plan.md
Previously updated : 05/28/2021 Last updated : 06/10/2021
This article assumes that you are familiar with the [Introduction to device iden
> [!NOTE] > The minimum required domain controller version for Windows 10 hybrid Azure AD join is Windows Server 2008 R2.
+Hybrid Azure AD joined devices require network line of sight to your domain controllers periodically. Without this connection, devices become unusable.
+
+Scenarios that break without line of sight to your domain controllers:
+
+- Device password change
+- User password change (Cached credentials)
+- TPM reset
+ ## Plan your implementation To plan your hybrid Azure AD implementation, you should familiarize yourself with:
As a first planning step, you should review your environment and determine wheth
### Handling devices with Azure AD registered state
-If your Windows 10 domain joined devices are [Azure AD registered](overview.md#getting-devices-in-azure-ad) to your tenant, it could lead to a dual state of Hybrid Azure AD joined and Azure AD registered device. We recommend upgrading to Windows 10 1803 (with KB4489894 applied) or above to automatically address this scenario. In pre-1803 releases, you will need to remove the Azure AD registered state manually before enabling Hybrid Azure AD join. In 1803 and above releases, the following changes have been made to avoid this dual state:
+If your Windows 10 domain joined devices are [Azure AD registered](concept-azure-ad-register.md) to your tenant, it could lead to a dual state of Hybrid Azure AD joined and Azure AD registered device. We recommend upgrading to Windows 10 1803 (with KB4489894 applied) or above to automatically address this scenario. In pre-1803 releases, you will need to remove the Azure AD registered state manually before enabling Hybrid Azure AD join. In 1803 and above releases, the following changes have been made to avoid this dual state:
- Any existing Azure AD registered state for a user would be automatically removed <i>after the device is Hybrid Azure AD joined and the same user logs in</i>. For example, if User A had an Azure AD registered state on the device, the dual state for User A is cleaned up only when User A logs in to the device. If there are multiple users on the same device, the dual state is cleaned up individually when those users log in. In addition to removing the Azure AD registered state, Windows 10 will also unenroll the device from Intune or other MDM, if the enrollment happened as part of the Azure AD registration via auto-enrollment. - Azure AD registered state on any local accounts on the device is not impacted by this change. It is only applicable to domain accounts. So Azure AD registered state on local accounts is not removed automatically even after user logon, since the user is not a domain user.
The table below provides details on support for these on-premises AD UPNs in Win
| -- | -- | -- | -- | | Routable | Federated | From 1703 release | Generally available | | Non-routable | Federated | From 1803 release | Generally available |
-| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lockscreen is not supported. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
+| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lock screen is not supported. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
| Non-routable | Managed | Not supported | | ## Next steps
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/overview.md
Title: What is device identity in Azure Active Directory?
-description: Learn how device identity management can help you to manage the devices that are accessing resources in your environment.
+description: Device identities and their use cases
Previously updated : 07/20/2020 Last updated : 06/09/2021 --
-#Customer intent: As an IT admin, I want to learn how to create and manage device identities in Azure AD, so that I can ensure that my users are accessing my resources from devices that meet my standards for security and compliance.
+ # What is a device identity?
-With the proliferation of devices of all shapes and sizes and the Bring Your Own Device (BYOD) concept, IT professionals are faced with two somewhat opposing goals:
--- Allow end users to be productive wherever and whenever-- Protect the organization's assets-
-To protect these assets, IT staff need to first manage the device identities. IT staff can build on the device identity with tools like Microsoft Intune to ensure standards for security and compliance are met. Azure Active Directory (Azure AD) enables single sign-on to devices, apps, and services from anywhere through these devices.
--- Your users get access to your organization's assets they need. -- Your IT staff get the controls they need to secure your organization.-
-Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices.
-
-## Getting devices in Azure AD
-
-To get a device in Azure AD, you have multiple options:
--- **Azure AD registered**
- - Devices that are Azure AD registered are typically personally owned or mobile devices, and are signed in with a personal Microsoft account or another local account.
- - Windows 10
- - iOS
- - Android
- - MacOS
-- **Azure AD joined**
- - Devices that are Azure AD joined are owned by an organization, and are signed in with an Azure AD account belonging to that organization. They exist only in the cloud.
- - Windows 10
- - [Windows Server 2019 Virtual Machines running in Azure](howto-vm-sign-in-azure-ad-windows.md) (Server core is not supported)
-- **Hybrid Azure AD joined**
- - Devices that are hybrid Azure AD joined are owned by an organization, and are signed in with an Active Directory Domain Services account belonging to that organization. They exist in the cloud and on-premises.
- - Windows 7, 8.1, or 10
- - Windows Server 2008 or newer
+A [device identity](/graph/api/resources/device?view=graph-rest-1.0) is an object in Azure Active Directory (Azure AD). This device object is similar to users, groups, or applications. A device identity gives administrators information they can use when making access or configuration decisions.
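Because device identities are directory objects, they can be read through Microsoft Graph. As a hedged sketch, assuming you already hold a Graph access token with a device-read permission such as `Device.Read.All`:

```python
import requests

# Assumption: graph_token holds a valid Microsoft Graph access token
# with permission to read device objects (for example, Device.Read.All).
graph_token = "<Microsoft Graph access token>"

response = requests.get(
    "https://graph.microsoft.com/v1.0/devices",
    headers={"Authorization": f"Bearer {graph_token}"},
)

for device in response.json().get("value", []):
    # trustType distinguishes Azure AD joined, hybrid joined, and registered devices.
    print(device.get("displayName"), device.get("trustType"))
```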
![Devices displayed in Azure AD Devices blade](./media/overview/azure-active-directory-devices-all-devices.png)
-> [!NOTE]
-> A hybrid state refers to more than just the state of a device. For a hybrid state to be valid, a valid Azure AD user also is required.
+There are three ways to get a device identity:
-## Device management
+- Azure AD registration
+- Azure AD join
+- Hybrid Azure AD join
-Devices in Azure AD can be managed using Mobile Device Management (MDM) tools like Microsoft Intune, Microsoft Endpoint Configuration Manager, Group Policy (hybrid Azure AD join), Mobile Application Management (MAM) tools, or other third-party tools.
+Device identities are a prerequisite for scenarios like [device-based Conditional Access policies](../conditional-access/require-managed-devices.md) and [Mobile Device Management with Microsoft Endpoint Manager](/mem/endpoint-manager-overview).
-## Resource access
+## Modern device scenario
-Registering and joining devices to Azure AD gives your users Seamless Sign-on (SSO) to cloud resources. This process also allows administrators the ability to apply Conditional Access policies to resources based on the device they are accessed from.
+The modern device scenario focuses on two of these methods:
-> [!NOTE]
-> Device-based Conditional Access policies require either hybrid Azure AD joined devices or compliant Azure AD joined or Azure AD registered devices.
+- [Azure AD registration](concept-azure-ad-register.md)
+ - Bring your own device (BYOD)
+ - Mobile device (cell phone and tablet)
+- [Azure AD join](concept-azure-ad-join.md)
+ - Windows 10 devices owned by your organization
+ - [Windows Server 2019 and newer servers in your organization running as VMs in Azure](howto-vm-sign-in-azure-ad-windows.md)
-The primary refresh token (PRT) contains information about the device and is required for SSO. If you have a device-based Conditional Access policy set on an application, without the PRT, access is denied. Hybrid Conditional Access policies require a hybrid state device and a valid user who is signed in.
+[Hybrid Azure AD join](concept-azure-ad-join-hybrid.md) is seen as an interim step on the road to Azure AD join. Hybrid Azure AD join provides organizations support for downlevel Windows versions back to Windows 7 and Server 2008. All three scenarios can coexist in a single organization.
-Devices that are Azure AD joined or hybrid Azure AD joined benefit from SSO to your organization's on-premises resources as well as cloud resources. More information can be found in the article, [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
+## Resource access
-## Device security
+Registering and joining devices to Azure AD gives users single sign-on (SSO) to cloud-based resources.
-- **Azure AD registered devices** utilize an account managed by the end user, this account is either a Microsoft account or another locally managed credential secured with one or more of the following.
- - Password
- - PIN
- - Pattern
- - Windows Hello
-- **Azure AD joined or hybrid Azure AD joined devices** utilize an organizational account in Azure AD secured with one or more of the following.
- - Password
- - Windows Hello for Business
+Devices that are Azure AD joined benefit from [SSO to your organization's on-premises resources](azuread-join-sso.md).
## Provisioning
-Getting devices in to Azure AD can be done in a self-service manner or a controlled provisioning process by administrators.
-
-## Summary
-
-With device identity management in Azure AD, you can:
--- Simplify the process of bringing and managing devices in Azure AD-- Provide your users with an easy to use access to your organization's cloud-based resources
+Getting devices into Azure AD can be done in a self-service manner or through a controlled process managed by administrators.
## License requirements
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Title: Provisioning logs in Azure Active Directory (preview) | Microsoft Docs
+ Title: Provisioning logs in Azure Active Directory | Microsoft Docs
description: Overview of the provisioning logs in Azure Active Directory. documentationcenter: ''
-# Provisioning logs in Azure Active Directory (preview)
+# Provisioning logs in Azure Active Directory
As an IT administrator, you want to know how your IT environment is doing. The information about your system's health enables you to assess whether and how you need to respond to potential issues.
Use the following table to better understand how to resolve errors that you find
* [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) * [Problem configuring user provisioning to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem.md)
-* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
+* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/15/2019 Last updated : 7/1/2021 # Rotate certificates in Azure Kubernetes Service (AKS)
AKS generates and uses the following certificates, Certificate Authorities, and
> [!NOTE] > AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other certificates expire after two years. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools. >
-> Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the certificate details for the *myAKSCluster* cluster in resource group *rg*
+> Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the client certificate details for the *myAKSCluster* cluster in resource group *rg*
> ```console > kubectl config view --raw -o jsonpath="{.users[?(@.name == 'clusterUser_rg_myAKSCluster')].user.client-certificate-data}" | base64 -d | openssl x509 -text | grep -A2 Validity > ```
+* Check expiration date of apiserver certificate
+```console
+curl https://{apiserver-fqdn} -k -v 2>&1 |grep expire
+```
+ * Check expiration date of certificate on VMAS agent node ```console
-az vm run-command invoke -g MC_rg_myAKSCluster_region -n vm-name --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/client.crt -noout -enddate"
+az vm run-command invoke -g MC_rg_myAKSCluster_region -n vm-name --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/apiserver.crt -noout -enddate"
``` * Check expiration date of certificate on one VMSS agent node
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Otherwise, your deployment method will depend on your archive type:
To deploy .jar files to Java SE, use the `/api/zipdeploy/` endpoint of the Kudu site. For more information on this API, please see [this documentation](./deploy-zip.md#rest). > [!NOTE]
-> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](/app-service/faq-app-service-linux#built-in-images) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
+> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](/azure/app-service/faq-app-service-linux#built-in-images) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
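For illustration only, a minimal startup script might look like the following sketch; the script name and JAR path are placeholders, and the path should match wherever your JAR is actually deployed:

```console
#!/bin/sh
# Example startup script for a Java SE app on App Service (Linux).
# Use an absolute path to the JAR, because the script does not run from the directory it is placed in.
java -jar /home/myapp/myapp.jar
```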
### Tomcat
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-diagnostic-logs.md
Title: Enable diagnostics logging
description: Learn how to enable diagnostic logging and add instrumentation to your application, as well as how to access the information logged by Azure. ms.assetid: c9da27b2-47d4-4c33-a3cb-1819955ee43b Previously updated : 09/17/2019 Last updated : 07/06/2021
The following table shows the supported log types and descriptions:
| AppServiceEnvironmentPlatformLogs | Yes | N/A | Yes | Yes | App Service Environment: scaling, configuration changes, and status logs| | AppServiceAuditLogs | Yes | Yes | Yes | Yes | Login activity via FTP and Kudu | | AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; **only available for Premium tier and above** |
-| AppServiceAppLogs | ASP .NET & Tomcat <sup>1</sup> | ASP .NET & Tomcat <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Application logs |
+| AppServiceAppLogs | ASP.NET & Tomcat <sup>1</sup> | ASP.NET & Tomcat <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Application logs |
| AppServiceIPSecAuditLogs | Yes | Yes | Yes | Yes | Requests from IP Rules | | AppServicePlatformLogs | TBA | Yes | Yes | Yes | Container operation logs | | AppServiceAntivirusScanAuditLogs | Yes | Yes | Yes | Yes | [Anti-virus scan logs](https://azure.github.io/AppService/2020/12/09/AzMon-AppServiceAntivirusScanAuditLogs.html) using Microsoft Defender; **only available for Premium tier** |
-<sup>1</sup> For Tomcat apps, add "TOMCAT_USE_STARTUP_BAT" to the app settings and set it to false or 0. Need to be on the *latest* Tomcat version and use *java.util.logging*.
+<sup>1</sup> For Tomcat apps, add `TOMCAT_USE_STARTUP_BAT` to the app settings and set it to `false` or `0`. You must be on the *latest* Tomcat version and use *java.util.logging*.
-<sup>2</sup> For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to true or to 1.
+<sup>2</sup> For Java SE apps, add `WEBSITE_AZMON_PREVIEW_ENABLED` to the app settings and set it to `true` or to `1`.
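For example, one way to add these app settings is with the Azure CLI; the resource group and app names below are placeholders:

```console
# Enable Application Insights application logs for a Java SE (Linux) app (footnote 2).
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITE_AZMON_PREVIEW_ENABLED=1

# For a Tomcat app, also set TOMCAT_USE_STARTUP_BAT to false (footnote 1).
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings TOMCAT_USE_STARTUP_BAT=false
```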
## <a name="nextsteps"></a> Next steps * [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md)
automanage Arm Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/arm-deploy.md
The `configurationProfileAssignment` value can be one of the following values:
* "DevTest" Follow these steps to deploy the ARM template:
-1. Save the below ARM template as `azuredeploy.json`
+1. Save the ARM template above as `azuredeploy.json`
1. Run the ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json` 1. Provide the values for machineName, automanageAccountName, and configurationProfileAssignment when prompted 1. You are done!
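If you prefer not to be prompted, you can pass the parameter values inline. The following sketch assumes the parameter names listed above and uses placeholder values:

```console
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters machineName=<vm-name> automanageAccountName=<account-name> configurationProfileAssignment=DevTest
```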
azure-functions Disable Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/disable-function.md
The recommended way to disable a function is with an app setting in the format `
> [!NOTE] > When you disable an HTTP triggered function by using the methods described in this article, the endpoint may still by accessible when running on your local computer.
+> [!NOTE]
+> Currently, function names that contain hyphens (`-`) cannot be disabled in Linux-based App Service plans. If you need to disable your functions in Linux plans, avoid using hyphens in your function names.
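For reference, the app setting can also be set from the Azure CLI. The following sketch assumes the `AzureWebJobs.<FUNCTION_NAME>.Disabled` setting format and an illustrative function named `HttpExample`:

```console
# Disable the function by setting its app setting to true; set it back to false to re-enable it.
az functionapp config appsettings set --resource-group <resource-group> --name <function-app-name> \
  --settings AzureWebJobs.HttpExample.Disabled=true
```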
+ ## Disable a function # [Portal](#tab/portal)
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md
module.exports = function(context, req) {
# [PowerShell](#tab/powershell)
-The following example shows a trigger binding in a *function.json* file and a [PowerShell function](functions-reference-node.md). The function looks for a `name` parameter either in the query string or the body of the HTTP request.
+The following example shows a trigger binding in a *function.json* file and a [PowerShell function](functions-reference-powershell.md). The function looks for a `name` parameter either in the query string or the body of the HTTP request.
```json {
If a function that uses the HTTP trigger doesn't complete within 230 seconds, th
## Next steps -- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
+- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-monitoring.md
By assigning logged items to a category, you have more control over telemetry ge
### Custom telemetry data
-In [C#](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions) and [JavaScript](functions-reference-node.md#log-custom-telemetry), you can use an Application Insights SDK to write custom telemetry data.
+In [C#](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions), [JavaScript](functions-reference-node.md#log-custom-telemetry), and [Python](functions-reference-python.md#log-custom-telemetry), you can use an Application Insights SDK to write custom telemetry data.
### Dependencies
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
More logging methods are available that let you write to the console at differen
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.md).
+### Log custom telemetry
+
+By default, Functions writes output as traces to Application Insights. For more control, you can instead use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure) to send custom telemetry data to your Application Insights instance.
+
+>[!NOTE]
+> To use the OpenCensus Python Extensions, you need to enable [Python Extensions](#python-worker-extensions) by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1` in `local.settings.json` and in your application settings.
+>
+
+```
+// requirements.txt
+...
+opencensus-extension-azure-functions
+opencensus-ext-requests
+```
+
+```python
+import json
+import logging
+
+import requests
+from opencensus.extension.azure.functions import OpenCensusExtension
+from opencensus.trace import config_integration
+
+config_integration.trace_integrations(['requests'])
+
+OpenCensusExtension.configure()
+
+def main(req, context):
+ logging.info('Executing HttpTrigger with OpenCensus extension')
+
+ # You must use context.tracer to create spans
+ with context.tracer.span("parent"):
+ response = requests.get(url='http://example.com')
+
+ return json.dumps({
+ 'method': req.method,
+ 'response': response.status_code,
+ 'ctx_func_name': context.function_name,
+ 'ctx_func_dir': context.function_directory,
+ 'ctx_invocation_id': context.invocation_id,
+ 'ctx_trace_context_Traceparent': context.trace_context.Traceparent,
+ 'ctx_trace_context_Tracestate': context.trace_context.Tracestate,
+ })
+```
+ ## HTTP Trigger and bindings The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function.
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-elevation-data.md
To request elevation data in raster tile format using the Postman app:
3. Enter a **Request name** for the request.
-4. Select the collection that you created, and then select **Save**.
-
-5. On the **Builder** tab, select the **GET** HTTP method and then enter the following URL to request the raster tile.
+4. On the **Builder** tab, select the **GET** HTTP method and then enter the following URL to request the raster tile.
```http https://atlas.microsoft.com/map/tile?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&tilesetId=microsoft.dem&zoom=13&x=6074&y=3432
To request elevation data in raster tile format using the Postman app:
>[!Important] >For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
-6. Select the **Send** button.
+5. Select the **Send** button.
You should receive the raster tile that contains the elevation data in GeoTIFF format. Each pixel within the raster tile raw data is of type `float`. The value of each pixel represents the elevation height in meters.
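If you prefer the command line to Postman, the same request can be made with a tool such as curl; the output file name below is arbitrary:

```console
# Download the elevation raster tile and save it as a GeoTIFF file.
curl -o elevation-tile.tiff "https://atlas.microsoft.com/map/tile?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&tilesetId=microsoft.dem&zoom=13&x=6074&y=3432"
```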
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
To upload the Drawing package:
4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload-preview):
+5. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload-preview). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=dwgzippackage&subscription-key={Azure-Maps-Primary-Subscription-key} ```
- >[!Important]
- >For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
- 6. Select the **Headers** tab. 7. In the **KEY** field, select `Content-Type`.
To upload the Drawing package:
To check the status of the drawing package and retrieve its unique ID (`udid`):
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **GET** HTTP method.
+4. Select the **GET** HTTP method.
-6. Enter the `status URL` you copied in [Upload a Drawing package](#upload-a-drawing-package). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in [Upload a Drawing package](#upload-a-drawing-package). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/mapData/operations/<operationId>?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab.
+7. In the response window, select the **Headers** tab.
-9. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource.
+8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource.
:::image type="content" source="./media/tutorial-creator-indoor-maps/resource-location-url.png" alt-text="Copy the resource location URL.":::
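As a command-line alternative to Postman, you can make the same status request with curl; the `-i` flag prints the response headers so you can read the **Resource-Location** value:

```console
curl -i "https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key}"
```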
You can retrieve metadata from the Drawing package resource. The metadata contai
To retrieve content metadata:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
-3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
-
-4. Select the collection you previously created, and then select **Save**.
+3. Enter a **Request name** for the request, such as *GET Data Upload Metadata*.
-5. Select the **GET** HTTP method.
+4. Select the **GET** HTTP method.
-6. Enter the `resource Location URL` you copied in [Check Drawing package upload status](#check-the-drawing-package-upload-status). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `resource Location URL` you copied in [Check Drawing package upload status](#check-the-drawing-package-upload-status). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Body** tab. The metadata should like the following JSON fragment:
+7. In the response window, select the **Body** tab. The metadata should look like the following JSON fragment:
```json {
Now that the Drawing package is uploaded, we'll use the `udid` for the uploaded
To convert a Drawing package:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *POST Convert Drawing Package*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **POST** HTTP method.
+4. Select the **POST** HTTP method.
-6. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded package):
+5. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded package):
```http https://us.atlas.microsoft.com/conversions?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&udid={udid}&inputType=DWG&outputOntology=facility-2.0 ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab.
+7. In the response window, select the **Headers** tab.
-9. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the conversion.
+8. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the conversion.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="Copy the value of the location key for drawing package.":::
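The same conversion request can be issued with curl; again, `-i` exposes the **Operation-Location** header that holds the status URL:

```console
curl -i -X POST "https://us.atlas.microsoft.com/conversions?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&udid={udid}&inputType=DWG&outputOntology=facility-2.0"
```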
After the conversion operation completes, it returns a `conversionId`. We can ac
To check the status of the conversion process and retrieve the `conversionId`:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *GET Conversion Status*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **GET** HTTP method:
+4. Select the **GET** HTTP method:
-6. Enter the `status URL` you copied in [Convert a Drawing package](#convert-a-drawing-package). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in [Convert a Drawing package](#convert-a-drawing-package). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/conversions/operations/<operationId>?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab.
+7. In the response window, select the **Headers** tab.
-9. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which can be used by other APIs to access the converted map data.
+8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which can be used by other APIs to access the converted map data.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="Copy the conversion ID.":::
A dataset is a collection of map features, such as buildings, levels, and rooms.
To create a dataset:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *POST Dataset Create*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **POST** HTTP method.
+4. Select the **POST** HTTP method.
-6. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{conversionId`} with the `conversionId` obtained in [Check Drawing package conversion status](#check-the-drawing-package-conversion-status)):
+5. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{conversionId}` with the `conversionId` obtained in [Check Drawing package conversion status](#check-the-drawing-package-conversion-status)):
```http https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab.
+7. In the response window, select the **Headers** tab.
-9. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the dataset.
+8. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the dataset.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-dataset-location-url.png" border="true" alt-text="Copy the value of the location key for dataset.":::
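A curl equivalent of the dataset creation call looks like the following; the tileset creation call in the next section follows the same pattern:

```console
curl -i -X POST "https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&subscription-key={Azure-Maps-Primary-Subscription-key}"
```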
To create a dataset:
To check the status of the dataset creation process and retrieve the `datasetId`:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *GET Dataset Status*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **GET** HTTP method.
+4. Select the **GET** HTTP method.
-6. Enter the `status URL` you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/datasets/operations/<operationId>?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the `resource location URL`. The `resource location URL` contains the unique identifier (`datasetId`) of the dataset.
+7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the `resource location URL`. The `resource location URL` contains the unique identifier (`datasetId`) of the dataset.
-9. Copy the `datasetId`, because you'll use it in the next sections of this tutorial.
+8. Copy the `datasetId`, because you'll use it in the next sections of this tutorial.
:::image type="content" source="./media/tutorial-creator-indoor-maps/dataset-id.png" alt-text="Copy the dataset ID.":::
A tileset is a set of vector tiles that render on the map. Tilesets are created
To create a tileset:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *POST Tileset Create*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **POST** HTTP method.
+4. Select the **POST** HTTP method.
-6. Enter the following URL to the [Tileset API](/rest/api/maps/v2/tileset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key), and `{datasetId`} with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status):
+5. Enter the following URL to the [Tileset API](/rest/api/maps/v2/tileset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId}` with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
```http https://us.atlas.microsoft.com/tilesets?api-version=2.0&datasetID={datasetId}&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab.
+7. In the response window, select the **Headers** tab.
-9. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the tileset.
+8. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the tileset.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-tileset-location-url.png" border="true" alt-text="Copy the value of the tileset status url.":::
To create a tileset:
To check the status of the dataset creation process and retrieve the `tilesetId`:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *GET Tileset Status*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **GET** HTTP method.
+4. Select the **GET** HTTP method.
-6. Enter the `status URL` you copied in [Create a tileset](#create-a-tileset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in [Create a tileset](#create-a-tileset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/tilesets/operations/<operationId>?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select **Send**.
+6. Select **Send**.
-8. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the `resource location URL`. The `resource location URL` contains the unique identifier (`tilesetId`) of the dataset.
+7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the `resource location URL`. The `resource location URL` contains the unique identifier (`tilesetId`) of the dataset.
:::image type="content" source="./media/tutorial-creator-indoor-maps/tileset-id.png" alt-text="Copy the tileset ID.":::
Datasets can be queried using [WFS API](/rest/api/maps/v2/wfs). You can use the
To query the all collections in your dataset:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *GET Dataset Collections*.
-4. Select the collection you previously created, and then select **Save**.
+4. Select the **GET** HTTP method.
-5. Select the **GET** HTTP method.
-
-6. Enter the following URL to [WFS API](/rest/api/maps/v2/wfs). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key), and `{datasetId`} with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status):
+5. Enter the following URL to the [WFS API](/rest/api/maps/v2/wfs). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId}` with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
```http https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0 ```
-7. Select **Send**.
+6. Select **Send**.
-8. The response body is returned in GeoJSON format and contains all collections in the dataset. For simplicity, the example here only shows the `unit` collection. To see an example that contains all collections, see [WFS Describe Collections API](/rest/api/maps/v2/wfs/collection-description). To learn more about any collection, you can select any of the URLs inside the `link` element.
+7. The response body is returned in GeoJSON format and contains all collections in the dataset. For simplicity, the example here only shows the `unit` collection. To see an example that contains all collections, see [WFS Describe Collections API](/rest/api/maps/v2/wfs/collection-description). To learn more about any collection, you can select any of the URLs inside the `link` element.
```json {
In this section, we'll query [WFS API](/rest/api/maps/v2/wfs) for the `unit` fea
To query the unit collection in your dataset:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *GET Unit Collection*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **GET** HTTP method.
+4. Select the **GET** HTTP method.
-6. Enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId`} with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
+5. Enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId}` with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
```http https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0 ```
-7. Select **Send**.
+6. Select **Send**.
-8. After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". In this tutorial, we'll use "UNIT26" as our feature `id` in the next section.
+7. After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". In this tutorial, we'll use "UNIT26" as our feature `id` in the next section.
```json {
Feature statesets define dynamic properties and values on specific features that
To create a stateset:
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *POST Create Stateset*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **POST** HTTP method.
+4. Select the **POST** HTTP method.
-6. Enter the following URL to the [Stateset API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId`} with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
+5. Enter the following URL to the [Stateset API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId}` with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
```http https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId={datasetId}&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select the **Headers** tab.
+6. Select the **Headers** tab.
-8. In the **KEY** field, select `Content-Type`.
+7. In the **KEY** field, select `Content-Type`.
-9. In the **VALUE** field, select `application/json`.
+8. In the **VALUE** field, select `application/json`.
:::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png" alt-text="Header tab information for stateset creation.":::
-10. Select the **Body** tab.
+9. Select the **Body** tab.
-11. In the dropdown lists, select **raw** and **JSON**.
+10. In the dropdown lists, select **raw** and **JSON**.
-12. Copy the following JSON styles, and then paste them in the **Body** window:
+11. Copy the following JSON styles, and then paste them in the **Body** window:
```json {
To create a stateset:
} ```
-13. Select **Send**.
+12. Select **Send**.
-14. After the response returns successfully, copy the `statesetId` from the response body. In the next section, we'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26".
+13. After the response returns successfully, copy the `statesetId` from the response body. In the next section, we'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26".
:::image type="content" source="./media/tutorial-creator-indoor-maps/response-stateset-id.png"alt-text="Stateset ID response.":::
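If you're working from the command line, the same stateset can be created with curl. The file name `stateset-styles.json` is illustrative; it would contain the JSON styles shown in the previous step:

```console
curl -X POST -H "Content-Type: application/json" \
  --data @stateset-styles.json \
  "https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId={datasetId}&subscription-key={Azure-Maps-Primary-Subscription-key}"
```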
To create a stateset:
To update the `occupied` state of the unit with feature `id` "UNIT26":
-1. Select **New**.
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**. 3. Enter a **Request name** for the request, such as *PUT Set Stateset*.
-4. Select the collection you previously created, and then select **Save**.
-
-5. Select the **PUT** HTTP method.
+4. Select the **PUT** HTTP method.
-6. Enter the following URL to the [Feature Statesets API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{statesetId`} with the `statesetId` obtained in [Create a feature stateset](#create-a-feature-stateset)):
+5. Enter the following URL to the [Feature Statesets API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{statesetId}` with the `statesetId` obtained in [Create a feature stateset](#create-a-feature-stateset)):
```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-7. Select the **Headers** tab.
+6. Select the **Headers** tab.
-8. In the **KEY** field, select `Content-Type`.
+7. In the **KEY** field, select `Content-Type`.
-9. In the **VALUE** field, select `application/json`.
+8. In the **VALUE** field, select `application/json`.
:::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png" alt-text="Header tab information for stateset creation.":::
-10. Select the **Body** tab.
+9. Select the **Body** tab.
-11. In the dropdown lists, select **raw** and **JSON**.
+10. In the dropdown lists, select **raw** and **JSON**.
-12. Copy the following JSON style, and then paste it in the **Body** window:
+11. Copy the following JSON style, and then paste it in the **Body** window:
```json {
To update the `occupied` state of the unit with feature `id` "UNIT26":
>[!NOTE] > The update will be saved only if the posted time stamp is after the time stamp of the previous request.
-13. Select **Send**.
+12. Select **Send**.
-14. After the update completes, you'll receive a `200 OK` HTTP status code. If you implemented [dynamic styling](indoor-map-dynamic-styling.md) for an indoor map, the update displays at the specified time stamp in your rendered map.
+13. After the update completes, you'll receive a `200 OK` HTTP status code. If you implemented [dynamic styling](indoor-map-dynamic-styling.md) for an indoor map, the update displays at the specified time stamp in your rendered map.
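The same update can be sent with curl. The file name `feature-state.json` is illustrative and would contain the JSON state body from the previous step:

```console
curl -X PUT -H "Content-Type: application/json" \
  --data @feature-state.json \
  "https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key}"
```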
You can use the [Feature Get Stateset API](/rest/api/maps/v2/feature-state/get-states) to retrieve the state of a feature using its feature `id`. You can also use the [Feature State Delete State API](/rest/api/maps/v2/feature-state/delete-stateset) to delete the stateset and its resources.
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-geofence.md
Title: 'Tutorial: Create a geofence and track devices on a Microsoft Azure Map'
description: Tutorial on how to set up a geofence. See how to track devices relative to the geofence by using the Azure Maps Spatial service Previously updated : 8/20/2020 Last updated : 7/06/2021
# Tutorial: Set up a geofence by using Azure Maps
-This tutorial walks you through the basics of creating and using Azure Maps geofence services. You'll do this in the context of the following scenario:
+This tutorial walks you through the basics of creating and using Azure Maps geofence services.
+
+Consider the following scenario:
*A construction site manager must track equipment as it enters and leaves the perimeters of a construction area. Whenever a piece of equipment exits or enters these perimeters, an email notification is sent to the operations manager.*
-Azure Maps provides a number of services to support the tracking of equipment entering and exiting the construction area. In this tutorial, you:
+Azure Maps provides a number of services to support the tracking of equipment entering and exiting the construction area. In this tutorial, you will:
> [!div class="checklist"] > * Upload [Geofencing GeoJSON data](geofence-geojson.md) that defines the construction site areas you want to monitor. You'll use the [Data Upload API](/rest/api/maps/data-v2/upload-preview) to upload geofences as polygon coordinates to your Azure Maps account.
Azure Maps provides a number of services to support the tracking of equipment en
1. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account). 2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-This tutorial uses the [Postman](https://www.postman.com/) application, but you can choose a different API development environment.
+This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment.
## Upload geofencing GeoJSON data
-For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCollection`. The `FeatureCollection` contains two geofences that define polygonal areas within the construction site. The first geofence has no time expiration or restrictions. The second one can only be queried against during business hours (9:00 AM-5:00 PM in the Pacific Time zone), and will no longer be valid after January 1, 2022. For more information on the GeoJSON format, see [Geofencing GeoJSON data](geofence-geojson.md).
+In this tutorial, you'll upload geofencing GeoJSON data that contains a `FeatureCollection`. The `FeatureCollection` contains two geofences that define polygonal areas within the construction site. The first geofence has no time expiration or restrictions. The second can only be queried against during business hours (9:00 AM-5:00 PM in the Pacific Time zone), and will no longer be valid after January 1, 2022. For more information on the GeoJSON format, see [Geofencing GeoJSON data](geofence-geojson.md).
>[!TIP] >You can update your geofencing data at any time. For more information, see [Data Upload API](/rest/api/maps/data-v2/upload-preview).
-1. Open the Postman app. Near the top, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+To upload the geofencing GeoJSON data:
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *POST GeoJSON Data Upload*.
-2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofencing data to Azure Maps. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+4. Select the **POST** HTTP method.
+
+5. Enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&dataFormat=geojson
For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCo
The `geojson` parameter in the URL path represents the data format of the data being uploaded.
-3. Select the **Body** tab. Select **raw**, and then **JSON** as the input format. Copy and paste the following GeoJSON data into the **Body** text area:
+6. Select the **Body** tab.
+
+7. In the dropdown lists, select **raw** and **JSON**.
+
+8. Copy the following GeoJSON data, and then paste it in the **Body** window:
```JSON {
For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCo
} ```
-4. Select **Send**, and wait for the request to process. When the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
+9. Select **Send**.
+
+10. In the response window, select the **Headers** tab.
+
+11. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the GeoJSON data upload.
```http https://us.atlas.microsoft.com/mapData/operations/<operationId>?api-version=2.0 ```
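As a command-line alternative to the Postman steps above, the upload can also be done with curl. The file name `geofence.json` is illustrative and would contain the GeoJSON shown in step 8:

```console
curl -i -X POST -H "Content-Type: application/json" \
  --data @geofence.json \
  "https://us.atlas.microsoft.com/mapData?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&dataFormat=geojson"
```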
-5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should like the following URL:
+### Check the GeoJSON data upload status
+
+To check the status of the GeoJSON data and retrieve its unique ID (`udid`):
+
+1. Select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data](#upload-geofencing-geojson-data). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```HTTP https://us.atlas.microsoft.com/mapData/<operationId>?api-version=2.0&subscription-key={Subscription-key} ```
-6. When the request completes successfully, select the **Headers** tab in the response window. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Save the `udid` to query the Get Geofence API in the last section of this tutorial. Optionally, you can use the `resource location URL` to retrieve metadata from this resource in the next step.
+6. Select **Send**.
+
+7. In the response window, select the **Headers** tab.
+
+8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Save the `udid` to query the Get Geofence API in the last section of this tutorial.
:::image type="content" source="./media/tutorial-geofence/resource-location-url.png" alt-text="Copy the resource location URL.":::
-7. To retrieve content metadata, create a **GET** HTTP request on the `resource location URL` that was retrieved in step 7. Make sure to append your primary subscription key to the URL for authentication. The **GET** request should like the following URL:
+### (Optional) Retrieve GeoJSON data metadata
+
+You can retrieve metadata from the uploaded data. The metadata contains information like the resource location URL, creation date, updated date, size, and upload status.
+
+To retrieve content metadata:
+
+1. Select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET Data Upload Metadata*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status](#check-the-geojson-data-upload-status). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-8. When the request completes successfully, select the **Headers** tab in the response window. The metadata should like the following JSON fragment:
+6. In the response window, select the **Body** tab. The metadata should look like the following JSON fragment:
```json {
For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCo
## Create workflows in Azure Logic Apps
-Next, you create two [logic app](../event-grid/handler-webhooks.md#logic-apps) endpoints that trigger an email notification. Here's how to create the first one:
+Next, we'll create two [logic app](../event-grid/handler-webhooks.md#logic-apps) endpoints that trigger an email notification.
+
+To create the logic apps:
1. Sign in to the [Azure portal](https://portal.azure.com).
Next, you create two [logic app](../event-grid/handler-webhooks.md#logic-apps) e
3. In the **Search the Marketplace** box, type **Logic App**.
-4. From the results, select **Logic App** > **Create**.
+4. From the results, select **Logic App**. Then, select **Create**.
5. On the **Logic App** page, enter the following values: * The **Subscription** that you want to use for this logic app.
Next, you create two [logic app](../event-grid/handler-webhooks.md#logic-apps) e
:::image type="content" source="./media/tutorial-geofence/logic-app-create.png" alt-text="Screenshot of create a logic app.":::
-6. Select **Review + Create**. Review your settings and select **Create** to submit the deployment. When the deployment successfully completes, select **Go to resource**. You're taken to **Logic App Designer**.
+6. Select **Review + Create**. Review your settings and select **Create**.
+
+7. When the deployment completes successfully, select **Go to resource**.
-7. Select a trigger type. Scroll down to the **Start with a common trigger** section. Select **When an HTTP request is received**.
+8. In the **Logic App Designer**, scroll down to the **Start with a common trigger** section. Select **When an HTTP request is received**.
:::image type="content" source="./media/tutorial-geofence/logic-app-trigger.png" alt-text="Screenshot of create a logic app HTTP trigger.":::
-8. In the upper-right corner of Logic App Designer, select **Save**. The **HTTP POST URL** is automatically generated. Save the URL. You need it in the next section to create an event endpoint.
+9. In the upper-right corner of Logic App Designer, select **Save**. The **HTTP POST URL** is automatically generated. Save the URL. You need it in the next section to create an event endpoint.
:::image type="content" source="./media/tutorial-geofence/logic-app-httprequest.png" alt-text="Screenshot of Logic App HTTP Request URL and JSON.":::
-9. Select **+ New Step**. Now you'll choose an action. Type `outlook.com email` in the search box. In the **Actions** list, scroll down and select **Send an email (V2)**.
+10. Select **+ New Step**.
+
+11. In the search box, type `outlook.com email`. In the **Actions** list, scroll down and select **Send an email (V2)**.
:::image type="content" source="./media/tutorial-geofence/logic-app-designer.png" alt-text="Screenshot of create a logic app designer.":::
-10. Sign in to your Outlook account. Make sure to select **Yes** to allow the logic app to access the account. Fill in the fields for sending an email.
+12. Sign in to your Outlook account. Make sure to select **Yes** to allow the logic app to access the account. Fill in the fields for sending an email.
:::image type="content" source="./media/tutorial-geofence/logic-app-email.png" alt-text="Screenshot of create a logic app send email step."::: >[!TIP] > You can retrieve GeoJSON response data, such as `geometryId` or `deviceId`, in your email notifications. You can configure Logic Apps to read the data sent by Event Grid. For information on how to configure Logic Apps to consume and pass event data into email notifications, see [Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps](../event-grid/publish-iot-hub-events-to-logic-apps.md).
-11. In the upper-left corner of Logic App Designer, select **Save**.
+13. In the upper-left corner of **Logic App Designer**, select **Save**.
-To create a second logic app to notify the manager when equipment exits the construction site, repeat steps 3-11. Name the logic app `Equipment-Exit`.
+14. To create a second logic app to notify the manager when equipment exits the construction site, repeat the same process. Name the logic app `Equipment-Exit`.
## Create Azure Maps events subscriptions
-Azure Maps supports [three event types](../event-grid/event-schema-azure-maps.md). Here, you need to create two different event subscriptions: one for geofence enter events, and one for geofence exit events.
+Azure Maps supports [three event types](../event-grid/event-schema-azure-maps.md). In this tutorial, we'll create subscriptions to the following two events:
+
+* Geofence enter events
+* Geofence exit events
+
+To create the geofence enter and exit event subscriptions:
+
+1. In your Azure Maps account, select **Subscriptions**.
-The following steps show how to create an event subscription for the geofence enter events. You can subscribe to geofence exit events by repeating the steps in a similar manner.
+2. Select your subscription name.
-1. Go to your Azure Maps account. In the dashboard, select **Subscriptions**. Select your subscription name, and select **events** from the settings menu.
+3. In the settings menu, select **events**.
:::image type="content" source="./media/tutorial-geofence/events-tab.png" alt-text="Screenshot of go to Azure Maps account events.":::
-2. To create an event subscription, select **+ Event Subscription** from the events page.
+4. On the events page, select **+ Event Subscription**.
:::image type="content" source="./media/tutorial-geofence/create-event-subscription.png" alt-text="Screenshot of create an Azure Maps events subscription.":::
-3. On the **Create Event Subscription** page, enter the following values:
+5. On the **Create Event Subscription** page, enter the following values:
* The **Name** of the event subscription. * The **Event Schema** should be *Event Grid Schema*. * The **System Topic Name** for this event subscription, which in this case is `Contoso-Construction`.
The following steps show how to create an event subscription for the geofence en
:::image type="content" source="./media/tutorial-geofence/events-subscription.png" alt-text="Screenshot of Azure Maps events subscription details.":::
-4. Select **Create**.
+6. Select **Create**.
-Repeat steps 1-4 for the logic app exit endpoint that you created in the previous section. On step 3, make sure to choose `Geofence Exited` as the event type.
+7. Repeat the same process for the geofence exit event. Make sure to choose `Geofence Exited` as the event type.
## Use Spatial Geofence Get API
-Use [Spatial Geofence Get API](/rest/api/maps/spatial/getgeofence) to send email notifications to the operations manager when a piece of equipment enters or exits the geofences.
+Next, we'll use the [Spatial Geofence Get API](/rest/api/maps/spatial/getgeofence) to send email notifications to the operations manager when a piece of equipment enters or exits the geofences.
Each piece of equipment has a `deviceId`. In this tutorial, you're tracking a single piece of equipment, with a unique ID of `device_01`.
Each of the following sections makes API requests by using the five different lo
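Each request follows the same pattern, so as a command-line alternative to the Postman steps below, a single curl sketch covers them all; change `lat` and `lon` for each location, and replace the placeholders as usual:

```console
curl "https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit"
```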
### Equipment location 1 (47.638237,-122.132483)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 1*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *Location 1*.
+
+4. Select the **GET** HTTP method.
-2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
+5. Enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)):
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit ```
-3. Select **Send**. The following GeoJSON appears in the response window.
+6. Select **Send**.
+
+7. The response should look like the following GeoJSON fragment:
```json {
In the preceding GeoJSON response, the negative distance from the main site geof
### Location 2 (47.63800,-122.132531)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 2*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *Location 2*.
-2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)):
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={subscription-key}&api-version=1.0&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit ```
-3. Select **Send**. The following GeoJSON appears in the response window:
+6. Select **Send**.
+
+7. The response should look like the following GeoJSON fragment:
```json {
In the preceding GeoJSON response, the equipment has remained in the main site g
### Location 3 (47.63810783315048,-122.13336020708084)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 3*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
-2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
+3. Enter a **Request name** for the request, such as *Location 3*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)):
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit ```
-3. Select **Send**. The following GeoJSON appears in the response window:
+6. Select **Send**.
+
+7. The response should look like the following GeoJSON fragment:
```json {
In the preceding GeoJSON response, the equipment has remained in the main site g
### Location 4 (47.637988,-122.1338344)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 4*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *Location 4*.
+
+4. Select the **GET** HTTP method.
-2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
+5. Enter the following URL, replacing `{subscription-key}` with your Azure Maps primary subscription key and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data):

```HTTP
https://atlas.microsoft.com/spatial/geofence/json?subscription-key={subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit
```
-3. Select **Send**. The following GeoJSON appears in the response window:
+6. Select **Send**.
+
+7. The response should look like the following GeoJSON fragment:
```json {
In the preceding GeoJSON response, the equipment has remained in the main site g
### Location 5 (47.63799, -122.134505)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 5*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *Location 5*.
-2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL, replacing `{subscription-key}` with your Azure Maps primary subscription key and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data):

```HTTP
https://atlas.microsoft.com/spatial/geofence/json?subscription-key={subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit
```
-3. Select **Send**. The following GeoJSON appears in the response window:
+6. Select **Send**.
+
+7. The response should look like the following GeoJSON fragment:
```json {
In the preceding GeoJSON response, the equipment has remained in the main site g
In the preceding GeoJSON response, the equipment has exited the main site geofence. As a result, the `isEventPublished` parameter is set to `true`, and the operations manager receives an email notification indicating that the equipment has exited a geofence.

You can also [send email notifications using Event Grid and Logic Apps](../event-grid/publish-iot-hub-events-to-logic-apps.md) and review the [supported event handlers in Event Grid](../event-grid/event-handlers.md) that you can use with Azure Maps.

## Clean up resources
azure-maps Weather Services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-faq.md
- Title: Microsoft Azure Maps Weather services frequently asked questions (FAQ)
-description: Find answers to common questions about Azure Maps Weather services data and features.
-- Previously updated : 06/23/2021------
-# Azure Maps Weather services frequently asked questions (FAQ)
-
-This article answers common questions about Azure Maps [Weather services](/rest/api/maps/weather) data and features. The following topics are covered:
-
-* Data sources and data models
-* Weather services coverage and availability
-* Data update frequency
-* Developing with Azure Maps SDKs
-* Options to visualize weather data, including Microsoft Power BI integration
-
-## Data sources and data models
-
-**How does Azure Maps source Weather data?**
-
-Azure Maps is built with the collaboration of world-class mobility and location technology partners, including AccuWeather, who provides the underlying weather data. To read the announcement of Azure Map's collaboration with AccuWeather, see [Rain or shine: Azure Maps Weather Services will bring insights to your enterprise](https://azure.microsoft.com/blog/rain-or-shine-azure-maps-weather-services-will-bring-insights-to-your-enterprise/).
-
-AccuWeather has real-time weather and environmental information available anywhere in the world, largely because of their partnerships with many national governmental weather agencies and other proprietary arrangements. A list of this foundational information is provided below.
-
-* Publicly available global surface observations from government agencies
-* Proprietary surface observation datasets from governments and private companies
-* High-resolution radar data for over 40 countries/regions
-* Best-in-class real-time global lightning data
-* Government-issued weather warnings for over 60 countries/regions and territories
-* Satellite data from geostationary weather satellites covering the entire world
-* Over 150 numerical forecast models including internal, proprietary modeling, government models such as the U.S. Global Forecast System (GFS), and unique downscaled models provided by private companies
-* Air quality observations
-* Observations from departments of transportation
-
-Tens of thousands of surface observations, along with other data, are incorporated to create and influence the current conditions made available to users. These surface observations include not only freely available standard datasets, but also unique observations obtained from national meteorological services in many countries/regions, such as India, Brazil, Canada, and other proprietary inputs. These unique datasets increase the spatial and temporal resolution of current condition data for our users.
-
-These datasets are reviewed in real time for accuracy for the Digital Forecast System, which uses AccuWeather's proprietary artificial intelligence algorithms to continuously modify the forecasts, ensuring they always incorporate the latest data and, in that way, maximize their continual accuracy.
-
-**What models create weather forecast data?**
-
-Many weather forecast guidance systems are used to formulate global forecasts. Over 150 numerical forecast models are used each day, drawing on both external and internal datasets. These models include government models such as the European Centre ECMWF and the U.S. Global Forecast System (GFS). Also, AccuWeather incorporates proprietary high-resolution models that downscale forecasts to specific locations and strategic regional domains to predict weather with further accuracy. AccuWeather's unique blending and weighting algorithms have been developed over the last several decades. These algorithms optimally apply the many forecast inputs to provide highly accurate forecasts.
-
-## Weather services coverage and availability
-
-**What kind of coverage can I expect for different countries/regions?**
-
-Weather service coverage varies by country/region. All features aren't available in every country/region. For more information, see [coverage documentation](./weather-coverage.md).
-
-## Data update frequency
-
-**How often is Current Conditions data updated?**
-
-Current Conditions data is updated at least once an hour, but can be updated more frequently with rapidly changing conditions, such as large temperature changes, sky conditions changes, precipitation changes, and so on. Most observation stations around the world report many times per hour as conditions change. However, a few areas will still only update once, twice, or four times an hour at scheduled intervals.
-
-Azure Maps caches the Current Conditions data for up to 10 minutes to help capture the near real-time update frequency of the data as it occurs. To see when the cached response expires and avoid displaying outdated data, you can use the Expires Header information in the HTTP header of the Azure Maps API response.
-
-**How often is Daily and Hourly Forecast data updated?**
-
-Daily and Hourly Forecast data is updated multiple times per day, as updated observations are received. For example, if a forecasted high/low temperature is surpassed, our Forecast data will adjust at the next update cycle. Updates happen at different intervals but typically happens within an hour. Many sudden weather conditions may cause a forecast data change. For example, on a hot summer afternoon, an isolated thunderstorm might suddenly emerge, bringing heavy cloud coverage and rain. The isolated storm could effectively drop temperature by as much as 10 degrees. This new temperature value will impact the Hourly and Daily Forecasts for the rest of the day, and as such, will be updated in our datasets.
-
-Azure Maps Forecast APIs are cached for up to 30 mins. To see when the cached response expires and avoid displaying outdated data, you can look at the Expires Header information in the HTTP header of the Azure Maps API response. We recommend updating as necessary based on a specific product use case and UI (user interface).
-
-## Developing with Azure Maps SDKs
-
-**Does Azure Maps Web SDK natively support Weather services integration?**
-
-The Azure Maps Web SDK provides a services module. The services module is a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications by using JavaScript or TypeScript. To get started, see our [documentation](./how-to-use-services-module.md).
-
-**Does Azure Maps Android SDK natively support Weather services integration?**
-
-The Azure Maps Android SDK supports Mercator tile layers, which can have x/y/zoom notation, quad key notation, or EPSG 3857 bounding box notation.
-
-We plan to create a services module for Java/Android similar to the web SDK module. The Android services module will make it easy to access all Azure Maps services in a Java or Android app.
-
-## Data visualizations
-
-**Does Azure Maps Power BI Visual support Azure Maps weather tiles?**
-
-Yes. To learn how to migrate radar and infrared satellite tiles to the Microsoft Power BI visual, see [Add a tile layer to Power BI visual](./power-bi-visual-add-tile-layer.md).
-
-**How do I interpret colors used for radar and satellite tiles?**
-
-The Azure Maps [Weather concept article](./weather-services-concepts.md#radar-and-satellite-imagery-color-scale) includes a guide to help interpret colors used for radar and satellite tiles. The article covers color samples and HEX color codes.
-
-**Can I create radar and satellite tile animations?**
-
-Yes. In addition to real-time radar and satellite tiles, Azure Maps customers can request past and future tiles to enhance data visualizations with map overlays. Customers can call the [Get Map Tile v2 API](/rest/api/maps/renderv2/getmaptilepreview) or request tiles via Azure Maps web SDK. Radar tiles are available for up to 1.5 hours in the past, and for up to 2 hours in the future. The tiles are available in 5-minute intervals. Infrared tiles are provided for up to 3 hours in the past, and are available in 10-minute intervals. For more information, see the open-source Weather Tile Animation [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Animated%20tile%20layer).
-
-**Do you offer icons for different weather conditions?**
-
-Yes. You can find icons and their respective codes [here](./weather-services-concepts.md#weather-icons). Notice that only some of the Weather service (Preview) APIs, such as [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions), return the *iconCode* in the response. For more information, see the Current WeatherConditions open-source [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Get%20current%20weather%20at%20a%20location).
-
-## Next steps
-
-If this FAQ doesn't answer your question, you can contact us through the following channels (in escalating order):
-
-* The comments section of this article.
-* [MSFT Q&A page for Azure Maps](/answers/topics/azure-maps.html).
-* Microsoft Support. To create a new support request, in the [Azure portal](https://portal.azure.com/), on the Help tab, select the **Help + support** button, and then select **New support request**.
-* [Azure Maps UserVoice](https://feedback.azure.com/forums/909172-azure-maps) to submit feature requests.
-
-Learn how to request real-time and forecasted weather data using Azure Maps Weather
-> [!div class="nextstepaction"]
-> [Request Real-time weather data ](how-to-request-weather-data.md)
-
-Azure Maps Weather services concepts article:
-> [!div class="nextstepaction"]
-> [Weather services concepts](weather-services-concepts.md)
-
-Explore the Azure Maps Weather services API documentation:
-
-> [!div class="nextstepaction"]
-> [Azure Maps Weather services](/rest/api/maps/weather)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
Azure Monitor agent uses [Data Collection Rules (DCR)](data-collection-rule-over
## Should I switch to Azure Monitor agent? Azure Monitor agent coexists with the [generally available agents for Azure Monitor](agents-overview.md), but you may consider transitioning your VMs off the current agents during the Azure Monitor agent public preview period. Consider the following factors when making this determination. -- **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today latest operating systems and future environment support such as new operating system versions and types of networking requirements will most likely be provided only in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it.
+- **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environments, and networking requirements will most likely be provided in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it.
- **Current and new feature requirements.** Azure Monitor agent introduces several new capabilities such as filtering, scoping, and multi-homing, but it isn't at parity yet with the current agents for other functionality such as custom log collection and integration with all solutions ([see solutions in preview](/azure/azure-monitor/faq#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent)). Most new capabilities in Azure Monitor will only be made available with Azure Monitor agent, so over time more functionality will only be available in the new agent. Consider whether Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If Azure Monitor agent has all the core capabilities you require, then consider transitioning to it. If there are critical features that you require, then continue with the current agent until Azure Monitor agent reaches parity.
-- **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date published for the Log Analytics agents in August 2021. The current agents will be supported for several years once deprecation begins.
+- **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date will be published for the Log Analytics agents in August 2021. The current agents will be supported for several years once deprecation begins.
## Supported resource types Azure virtual machines, virtual machine scale sets, and Azure Arc enabled servers are currently supported. Azure Kubernetes Service and other compute resource types are not currently supported.
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
See this alert stateless evaluation example:
| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE. |
| 00:20 | FALSE | Alert doesn't fire. No actions called. Previous alert state remains ACTIVE. |
-Stateful alerts fire once per incident and resolve. This feature is currently in preview in the Azure public cloud. You can set this using **Automatically resolve alerts** in the alert details section.
+Stateful alerts fire once per incident and resolve. The alert rule resolves when the alert condition isn't met for 30 minutes for a specific evaluation period (to account for log ingestion delay), and for three consecutive evaluations, to reduce noise if conditions are flapping. For example, with a frequency of 5 minutes, the alert resolves after 40 minutes; with a frequency of 1 minute, the alert resolves after 32 minutes. The resolved notification is sent out via webhooks or email, and the status of the alert instance (called the monitor state) in the Azure portal is also set to resolved.
+
+The stateful alerts feature is currently in preview in the Azure public cloud. You can enable it by using **Automatically resolve alerts** in the alert details section.
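The two resolution examples above are consistent with a simple rule of thumb: the 30-minute ingestion-delay window plus two more evaluation cycles (three consecutive healthy evaluations in total). The following sketch shows that arithmetic; it's an inference from the examples, not an official formula.

```python
def estimated_resolve_minutes(frequency_minutes: int) -> int:
    """Rough estimate of when a stateful log alert auto-resolves, inferred
    from the examples above: a 30-minute window (to account for ingestion
    delay) plus two additional evaluation cycles, giving three consecutive
    healthy evaluations in total. Not an official formula."""
    return 30 + 2 * frequency_minutes

print(estimated_resolve_minutes(5))  # 40 minutes, matching the 5-minute frequency example
print(estimated_resolve_minutes(1))  # 32 minutes, matching the 1-minute frequency example
```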
## Location selection in log alerts
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-collectd.md
Take a copy of the instrumentation key, which identifies the resource.
On your Linux server machines: 1. Install [collectd](https://collectd.org/) version 5.4.0 or later.
-2. Download the [Application Insights collectd writer plugin](https://github.com/microsoft/ApplicationInsights-Java/tree/master/core/src/main/java/com/microsoft/applicationinsights/internal). Note the version number.
+2. Download the [Application Insights collectd writer plugin](https://github.com/microsoft/ApplicationInsights-Java/tree/main/agent/agent-tooling/src/main/java/com/microsoft/applicationinsights/agent/internal). Note the version number.
3. Copy the plugin JAR into `/usr/share/collectd/java`. 4. Edit `/etc/collectd/collectd.conf`: * Ensure that [the Java plugin](https://collectd.org/wiki/index.php/Plugin:Java) is enabled.
azure-monitor Java 2X Filter Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-2x-filter-telemetry.md
In ApplicationInsights.xml, add a `TelemetryProcessors` section like this exampl
```
-[Inspect the full set of built-in processors](https://github.com/microsoft/ApplicationInsights-Java/tree/master/core/src/main/java/com/microsoft/applicationinsights/internal).
+[Inspect the full set of built-in processors](https://github.com/microsoft/ApplicationInsights-Java/tree/main/agent/agent-tooling/src/main/java/com/microsoft/applicationinsights/agent/internal).
## Built-in filters
azure-monitor Container Insights Log Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-log-search.md
Perf
InsightsMetrics | where Name == "requests_count" | summarize Val=any(Val) by TimeGenerated=bin(TimeGenerated, 1m)
-| sort by TimeGenerated asc<br> &#124; project RequestsPerMinute = Val - prev(Val), TimeGenerated
+| sort by TimeGenerated asc
+| project RequestsPerMinute = Val - prev(Val), TimeGenerated
| render barchart ``` ### Pods by name and namespace
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
For important additional information, see [Monitoring Agents Overview](../agents
|apiserver_current_inflight_requests|No|Inflight Requests|Count|Average|Maximum number of currently used inflight requests on the apiserver per request kind in the last second|requestKind| |cluster_autoscaler_cluster_safe_to_autoscale|No|Cluster Health|Count|Average|Determines whether or not cluster autoscaler will take action on the cluster|No Dimensions| |cluster_autoscaler_scale_down_in_cooldown|No|Scale Down Cooldown|Count|Average|Determines if the scale down is in cooldown - No nodes will be removed during this timeframe|No Dimensions|
-|cluster_autoscaler_unneeded_nodes_count|No|Unneeded Nodes|Count|Average|Cluster auotscaler marks those nodes as candidates for deletion and are eventually deleted|No Dimensions|
+|cluster_autoscaler_unneeded_nodes_count|No|Unneeded Nodes|Count|Average|Cluster autoscaler marks those nodes as candidates for deletion and they are eventually deleted|No Dimensions|
|cluster_autoscaler_unschedulable_pods_count|No|Unschedulable Pods|Count|Average|Number of pods that are currently unschedulable in the cluster|No Dimensions| |kube_node_status_allocatable_cpu_cores|No|Total number of available cpu cores in a managed cluster|Count|Average|Total number of available cpu cores in a managed cluster|No Dimensions| |kube_node_status_allocatable_memory_bytes|No|Total amount of available memory in a managed cluster|Bytes|Average|Total amount of available memory in a managed cluster|No Dimensions|
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
Customer-Managed key is provided on dedicated cluster and these operations are r
- Behavior with Key Vault availability - In normal operation -- Storage caches AEK for short periods of time and goes back to Key Vault to unwrap periodically.
- - Key Vault connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for the duration of the availablility issue and this overcomes blips and availability issues. The query and ingestion capabilities continue without interruption.
+ - Key Vault connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for the duration of the availability issue and this overcomes blips and availability issues. The query and ingestion capabilities continue without interruption.
- Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 to 60 seconds.
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-collector-api.md
To use the HTTP Data Collector API, you create a POST request that includes the
|: |: | | Authorization |The authorization signature. Later in the article, you can read about how to create an HMAC-SHA256 header. | | Log-Type |Specify the record type of the data that is being submitted. Can only contain letters, numbers, and underscore (_), and may not exceed 100 characters. |
-| x-ms-date |The date that the request was processed, in RFC 1123 format. |
+| x-ms-date |The date that the request was processed, in RFC 7234 format. |
| x-ms-AzureResourceId | Resource ID of the Azure resource the data should be associated with. This populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](./design-logs-deployment.md#access-mode) queries. If this field isn't specified, the data will not be included in resource-context queries. | | time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field then its contents are used for **TimeGenerated**. If this field isnΓÇÖt specified, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. |
public class ApiExample {
String stringToHash = String .join("\n", httpMethod, String.valueOf(json.getBytes(StandardCharsets.UTF_8).length), contentType, xmsDate , resource);
- String hashedString = getHMAC254(stringToHash, sharedKey);
+ String hashedString = getHMAC256(stringToHash, sharedKey);
String signature = "SharedKey " + workspaceId + ":" + hashedString; postData(signature, dateString, json);
public class ApiExample {
private static String getServerTime() { Calendar calendar = Calendar.getInstance();
- SimpleDateFormat dateFormat = new SimpleDateFormat(RFC_1123_DATE);
+ SimpleDateFormat dateFormat = new SimpleDateFormat(RFC_1123_DATE, Locale.US);
dateFormat.setTimeZone(TimeZone.getTimeZone("GMT")); return dateFormat.format(calendar.getTime()); }
public class ApiExample {
} }
- private static String getHMAC254(String input, String key) throws InvalidKeyException, NoSuchAlgorithmException {
+ private static String getHMAC256(String input, String key) throws InvalidKeyException, NoSuchAlgorithmException {
String hash;
- Mac sha254HMAC = Mac.getInstance("HmacSHA256");
+ Mac sha256HMAC = Mac.getInstance("HmacSHA256");
Base64.Decoder decoder = Base64.getDecoder(); SecretKeySpec secretKey = new SecretKeySpec(decoder.decode(key.getBytes(StandardCharsets.UTF_8)), "HmacSHA256");
- sha254HMAC.init(secretKey);
+ sha256HMAC.init(secretKey);
Base64.Encoder encoder = Base64.getEncoder();
- hash = new String(encoder.encode(sha254HMAC.doFinal(input.getBytes(StandardCharsets.UTF_8))));
+ hash = new String(encoder.encode(sha256HMAC.doFinal(input.getBytes(StandardCharsets.UTF_8))));
return hash; }
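For comparison with the Java sample above, here is a minimal Python sketch that builds the same string to sign and `SharedKey` authorization value for the HTTP Data Collector API. The workspace ID and shared key are placeholders, and the sketch assumes a POST to the `/api/logs` resource as in the sample.

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone
from email.utils import format_datetime

# Placeholders (assumptions): use your own workspace ID and primary shared key.
WORKSPACE_ID = "<workspace-id>"
SHARED_KEY = "<base64-encoded-shared-key>"


def build_signature(body: bytes, content_type: str = "application/json"):
    """Return the x-ms-date header value and the Authorization header value
    for a POST to the /api/logs resource, mirroring the Java sample above."""
    x_ms_date = format_datetime(datetime.now(timezone.utc), usegmt=True)  # RFC 1123 date
    string_to_sign = "\n".join([
        "POST",
        str(len(body)),
        content_type,
        "x-ms-date:" + x_ms_date,
        "/api/logs",
    ])
    key = base64.b64decode(SHARED_KEY)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return x_ms_date, "SharedKey {}:{}".format(WORKSPACE_ID, signature)
```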
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/get-started-queries.md
SecurityEvent
| project Computer, TimeGenerated, EventDetails=Activity, EventCode=substring(Activity, 0, 4) ```
-**extend** keeps all original columns in the result set and defines additional ones. The following query uses **extend** to add the *EventCode* column. Note that this column may not display at the end of the table results in which case you would need to expand the details of a record to view it.
+**extend** keeps all original columns in the result set and defines additional ones. The following query uses **extend** to add the *EventCode* column. Note that this column may not display at the end of the table results, in which case you would need to expand the details of a record to view it.
```Kusto SecurityEvent
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-tutorial.md
Expand the **Log Management** solution and locate the **AppRequests** table. You
:::image type="content" source="media/log-analytics-tutorial/table-details.png" alt-text="Tables view" lightbox="media/log-analytics-tutorial/table-details.png":::
-Click **Learn more** to go to the table reference that documents each table and its columns. Click **Preview data** to have a quick look at a few recent records in the table. This can be useful to ensure that this is the data that you're expecting before you actually run a query with it.
+Click the link below **Useful links** to go to the table reference that documents each table and its columns. Click **Preview data** to have a quick look at a few recent records in the table. This can be useful to ensure that this is the data that you're expecting before you actually run a query with it.
:::image type="content" source="media/log-analytics-tutorial/sample-data.png" alt-text="Sample data" lightbox="media/log-analytics-tutorial/sample-data.png":::
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
Each workspace has its daily cap applied on a different hour of the day. The res
Soon after the daily limit is reached, the collection of billable data types stops for the rest of the day. Latency inherent in applying the daily cap means that the cap isn't applied at precisely the specified daily cap level. A warning banner appears across the top of the page for the selected Log Analytics workspace, and an operation event is sent to the *Operation* table under the **LogManagement** category. Data collection resumes after the reset time defined under *Daily limit will be set at*. We recommend defining an alert rule that's based on this operation event, configured to notify when the daily data limit is reached. For more information, see [Alert when daily cap is reached](#alert-when-daily-cap-is-reached) section. > [!NOTE]
-> The daily cap can't stop data collection as precisely as the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. For a query that is helpful in studying the daily cap behavior, see the [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) section in this article.
+> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. For a query that is helpful in studying the daily cap behavior, see the [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) section in this article.
> [!WARNING] > The daily cap doesn't stop the collection of data types **WindowsEvent**, **SecurityAlert**, **SecurityBaseline**, **SecurityBaselineSummary**, **SecurityDetection**, **SecurityEvent**, **WindowsFirewall**, **MaliciousIPCommunication**, **LinuxAuditLog**, **SysmonEvent**, **ProtectionStatus**, **Update**, and **UpdateSummary**, except for workspaces in which Azure Defender (Security Center) was installed before June 19, 2017.
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
So far we covered the configuration of your network, but you should also conside
Go to the Azure portal. In your resource's menu, there's a menu item called **Network Isolation** on the left-hand side. This page controls both which networks can reach the resource through a Private Link, and whether other networks can reach it or not. +
+> [!NOTE]
+> Starting August 16, 2021, Network Isolation will be strictly enforced. Resources set to block queries from public networks, and that aren't associated with an AMPLS, will stop accepting queries from any network.
+ ![LA Network Isolation](./media/private-link-security/ampls-log-analytics-lan-network-isolation-6.png) ### Connected Azure Monitor Private Link scopes
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-troubleshoot.md
If you still see a message that the virtual machine needs to be onboarded, it m
If you do not see both extensions for your operating system in the list of installed extensions, then they need to be installed. If the extensions are listed but their status does not appear as *Provisioning succeeded*, then the extension should be removed and reinstalled.

### Do you have connectivity issues?
-For Windows machines, you can use the *TestCloudConnectivity* tool to identify connectivity issue. This tool is installed by default with the agent in the folder *%SystemRoot%\Program Files\Microsoft Monitoring Agent\Agent*. Run the tool from an elevated command prompt. It will return results and highlight where the test fails.
+For Windows machines, you can use the *TestCloudConnectivity* tool to identify connectivity issues. This tool is installed by default with the agent in the folder *%SystemDrive%\Program Files\Microsoft Monitoring Agent\Agent*. Run the tool from an elevated command prompt. It will return results and highlight where the test fails.
![TestCloudConnectivity tool](media/vminsights-troubleshoot/test-cloud-connectivity.png)
azure-sql Read Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/read-scale-out.md
Previously updated : 01/20/2021 Last updated : 07/06/2021 # Use read-only replicas to offload read-only query workloads [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
If you wish to ensure that the application connects to the primary replica regar
## Data consistency
-One of the benefits of replicas is that the replicas are always in the transactionally consistent state, but at different points in time there may be some small latency between different replicas. Read scale-out supports session-level consistency. It means, if the read-only session reconnects after a connection error caused by replica unavailability, it may be redirected to a replica that is not 100% up-to-date with the read-write replica. Likewise, if an application writes data using a read-write session and immediately reads it using a read-only session, it is possible that the latest updates are not immediately visible on the replica. The latency is caused by an asynchronous transaction log redo operation.
+Data changes made on the primary replica propagate to read-only replicas asynchronously. Within a session connected to a read-only replica, reads are always transactionally consistent. However, because data propagation latency is variable, different replicas can return data at slightly different points in time relative to the primary and each other. If a read-only replica becomes unavailable and the session reconnects, it may connect to a replica that is at a different point in time than the original replica. Likewise, if an application changes data using a read-write session and immediately reads it using a read-only session, it is possible that the latest changes are not immediately visible on the read-only replica.
+
+Typical data propagation latency between the primary replica and read-only replicas varies in the range from tens of milliseconds to single-digit seconds. However, there is no fixed upper bound on data propagation latency. Conditions such as high resource utilization on the replica can increase latency substantially. Applications that require guaranteed data consistency across sessions, or require committed data to be readable immediately should use the primary replica.
> [!NOTE]
-> Replication latencies within the region are low, and this situation is rare. To monitor replication latency, see [Monitoring and troubleshooting read-only replica](#monitoring-and-troubleshooting-read-only-replicas).
+> To monitor data propagation latency, see [Monitoring and troubleshooting read-only replica](#monitoring-and-troubleshooting-read-only-replicas).
## Connect to a read-only replica
Commonly used views are:
|:|:| |[sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database)| Provides resource utilization metrics for the last hour, including CPU, data IO, and log write utilization relative to service objective limits.| |[sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql)| Provides aggregate wait statistics for the database engine instance. |
-|[sys.dm_database_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-replica-states-azure-sql-database)| Provides replica health state and synchronization statistics. Redo queue size and redo rate serve as indicators of data latency on the read-only replica. |
+|[sys.dm_database_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-replica-states-azure-sql-database)| Provides replica health state and synchronization statistics. Redo queue size and redo rate serve as indicators of data propagation latency on the read-only replica. |
|[sys.dm_os_performance_counters](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-performance-counters-transact-sql)| Provides database engine performance counters.| |[sys.dm_exec_query_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql)| Provides per-query execution statistics such as number of executions, CPU time used, etc.| |[sys.dm_exec_query_plan()](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-plan-transact-sql)| Provides cached query plans. |
If a long-running query on a read-only replica causes this kind of blocking, it
> If you receive error 3961, 1219, or 3947 when running queries against a read-only replica, retry the query. > [!TIP]
-> In Premium and Business Critical service tiers, when connected to a read-only replica, the `redo_queue_size` and `redo_rate` columns in the [sys.dm_database_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-replica-states-azure-sql-database) DMV may be used to monitor data synchronization process, serving as indicators of data latency on the read-only replica.
+> In Premium and Business Critical service tiers, when connected to a read-only replica, the `redo_queue_size` and `redo_rate` columns in the [sys.dm_database_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-replica-states-azure-sql-database) DMV may be used to monitor data synchronization process, serving as indicators of data propagation latency on the read-only replica.
> ## Enable and disable read scale-out
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video.md
Next, browse to the src/cloud-to-device-console-app folder. Here you'll see the
} ```
-Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing - hence `maximumSamplesPerSecond` is set to 2.
+Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
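As a quick sanity check of that sampling guidance, the following sketch (with assumed values for the live frame rate and the tracker window) computes the minimum sampling rate described above.

```python
import math

live_fps = 30        # assumed frame rate of the live video
tracker_window = 15  # approximate number of frames the object tracker can track

# Minimum frames per second that must reach the inference server so the tracker
# never runs past its window between inference results.
min_samples_per_second = math.ceil(live_fps / tracker_window)
print(min_samples_per_second)  # 2

# maximumSamplesPerSecond should be at least this value and no higher than the
# AI model's maximum processing rate (frames per second).
```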
+ ## Run the sample program
azure-video-analyzer Track Objects Live Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/track-objects-live-video.md
Open the URL for the pipeline topology in a browser, and examine the settings fo
} ```
-Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing - hence `maximumSamplesPerSecond` is set to 2.
+Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
## Run the sample program
azure-video-analyzer Use Intel Grpc Video Analytics Serving Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-intel-grpc-video-analytics-serving-tutorial.md
In the initial release of this inference server, you have access to the followin
- object_tracking for person_vehicle_bike_tracking ![object tracking for person vehicle](./media/use-intel-openvino-tutorial/object-tracking.png)
-It uses Pre-loaded Object Detection, Object Classification and Object Tracking pipelines to get started quickly. In addition it comes with pre-loaded [person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-vehicle-bike-detection-crossroad-0078/description/person-vehicle-bike-detection-crossroad-0078.md) and [vehicle-attributes-recognition-barrier-0039 models](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/vehicle-attributes-recognition-barrier-0039/description/vehicle-attributes-recognition-barrier-0039.md).
+It uses Pre-loaded Object Detection, Object Classification and Object Tracking pipelines to get started quickly. In addition it comes with pre-loaded [person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-vehicle-bike-detection-crossroad-0078/README.md) and [vehicle-attributes-recognition-barrier-0039 models](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/vehicle-attributes-recognition-barrier-0039/README.md).
> [!NOTE]
> By downloading and using the Edge module: OpenVINO™ DL Streamer – Edge AI Extension from Intel, and the included software, you agree to the terms and conditions under the [License Agreement](https://www.intel.com/content/www/us/en/legal/terms-of-use.html).
azure-video-analyzer Use Line Crossing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-line-crossing.md
Open the URL for the pipeline topology in a browser, and examine the settings fo
} ```
-Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing - hence `maximumSamplesPerSecond` is set to 2. This will effectively be 15 frames/sec.
+Here, `skipSamplesWithoutAnnotation` is set to `false` because the extension node needs to pass through all frames, whether or not they have inference results, to the downstream object tracker node. The object tracker is capable of tracking objects over 15 frames, approximately. If the live video is at a frame rate of 30 frames/sec, that means at least two frames in every second should be sent to the HTTP server for inferencing. Your AI model has a maximum FPS for processing, which is the highest value that `maximumSamplesPerSecond` should be set to.
Also look at the line crossing node parameter placeholders `linecrossingName` and `lineCoordinates`. We have provided default values for these parameters, but you can overwrite them by using the operations.json file. Look at how we pass other parameters from the operations.json file to a topology (for example, the RTSP URL).
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Previously updated : 05/06/2021 Last updated : 07/01/2021
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
* Bug fixes * Deprecated functionality
+## June 2021
+
+### Video Analyzer for Media deployed in six new regions
+
+You can now create a Video Analyzer for Media paid account in France Central, Central US, Brazil South, West Central US, Korea Central, and Japan West regions.
+
## May 2021 ### New source languages support for speech-to-text (STT), translation, and search
When indexing a video through our advanced video settings, you can view our new
The Video Indexer service was renamed to Azure Video Analyzer for Media.
+### Improved upload experience in the portal
+
+Video Analyzer for Media has a new upload experience in the [portal](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
+
+### New developer portal available in gov-cloud
+
+[Video Analyzer for Media Developer Portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
+ ### Observed people tracing (preview) Azure Video Analyzer for Media now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
You can now see the detected acoustic events in the closed captions file. The fi
**Audio Effects Detection** (preview) component detects various acoustics events and classifies them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction and more). For more information, see [Audio effects detection](audio-effects-detection.md).
-### Improved upload experience in the portal
-
-Video Analyzer for Media has a new upload experience in the portal:
-
-* New developer portal in available in Fairfax
-
-Video Analyzer for Media new [Developer Portal](https://api-portal.videoindexer.ai), is now also available in Gov-cloud.
- ## March 2021 ### Audio analysis
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
+
+ Title: Back up Azure VMware Solution VMs with Azure Backup Server
+description: Configure your Azure VMware Solution environment to back up virtual machines by using Azure Backup Server.
+ Last updated : 02/04/2021++
+# Back up Azure VMware Solution VMs with Azure Backup Server
+
+In this article, we'll back up VMware virtual machines (VMs) running on Azure VMware Solution with Azure Backup Server. First, thoroughly go through [Set up Microsoft Azure Backup Server for Azure VMware Solution](set-up-backup-server-for-azure-vmware-solution.md).
+
+Then, we'll walk through all of the necessary procedures to:
+
+> [!div class="checklist"]
+> * Set up a secure channel so that Azure Backup Server can communicate with VMware servers over HTTPS.
+> * Add the account credentials to Azure Backup Server.
+> * Add the vCenter to Azure Backup Server.
+> * Set up a protection group that contains the VMware VMs you want to back up, specify backup settings, and schedule the backup.
+
+## Create a secure connection to the vCenter server
+
+By default, Azure Backup Server communicates with VMware servers over HTTPS. To set up the HTTPS connection, download the VMware certificate authority (CA) certificate and import it on the Azure Backup Server.
+
+### Set up the certificate
+
+1. In the browser, on the Azure Backup Server machine, enter the vSphere Web Client URL.
+
+ > [!NOTE]
+ > If the VMware **Getting Started** page doesn't appear, verify the connection and browser proxy settings and try again.
+
+1. On the VMware **Getting Started** page, select **Download trusted root CA certificates**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/vsphere-web-client.png" alt-text="Screenshot showing the vSphere Web Client Getting Started window to access vSphere remotely.":::
+
+1. Save the **download.zip** file to the Azure Backup Server machine, and then extract its contents to the **certs** folder, which contains the following files:
+
+ - Root certificate file with an extension that begins with a numbered sequence like .0 and .1.
+ - CRL file with an extension that begins with a sequence like .r0 or .r1.
+
+1. In the **certs** folder, right-click the root certificate file and select **Rename** to change the extension to **.crt**.
+
+ The file icon changes to one that represents a root certificate.
+
+1. Right-click the root certificate, and select **Install Certificate**.
+
+1. In the **Certificate Import Wizard**, select **Local Machine** as the destination for the certificate, and select **Next**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/certificate-import-wizard1.png" alt-text="Screenshot showing the Certificate Import Wizard dialog with Local Machine selected.":::
+
+ > [!NOTE]
+ > If asked, confirm that you want to allow changes to the computer.
+
+1. Select **Place all certificates in the following store**, and select **Browse** to choose the certificate store.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/cert-import-wizard-local-store.png" alt-text="Screenshot showing the Certificate Store dialog with Place all certificates in the following store option selected.":::
+
+1. Select **Trusted Root Certification Authorities** as the destination folder, and select **OK**.
+
+1. Review the settings, and select **Finish** to start importing the certificate.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/cert-wizard-final-screen.png" alt-text="Screenshot showing the Certificate Import Wizard.":::
+
+1. After the certificate import is confirmed, sign in to the vCenter server to confirm that your connection is secure.
+
+### Enable TLS 1.2 on Azure Backup Server
+
+VMware 6.7 and later versions have TLS enabled as the communication protocol.
+
+1. Copy the following registry settings, and paste them into Notepad. Then save the file as TLS.REG without the .txt extension.
+
+ ```text
+
+ Windows Registry Editor Version 5.00
+
+ [HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727]
+
+ "SystemDefaultTlsVersions"=dword:00000001
+
+ "SchUseStrongCrypto"=dword:00000001
+
+ [HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319]
+
+ "SystemDefaultTlsVersions"=dword:00000001
+
+ "SchUseStrongCrypto"=dword:00000001
+
+ [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727]
+
+ "SystemDefaultTlsVersions"=dword:00000001
+
+ "SchUseStrongCrypto"=dword:00000001
+
+ [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
+
+ "SystemDefaultTlsVersions"=dword:00000001
+
+ "SchUseStrongCrypto"=dword:00000001
+
+ ```
+
+1. Right-click the TLS.REG file, and select **Merge** or **Open** to add the settings to the registry.
++
+## Add the account on Azure Backup Server
+
+1. Open Azure Backup Server, and in the Azure Backup Server console, select **Management** > **Production Servers** > **Manage VMware**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-credentials.png" alt-text="Screenshot showing Microsoft Azure Backup console.":::
+
+1. In the **Manage Credentials** dialog box, select **Add**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/mabs-manage-credentials-dialog.png" alt-text="Screenshot showing Manage Credentials in Azure Backup Server.":::
+
+1. In the **Add Credential** dialog box, enter a name and a description for the new credential. Specify the user name and password you defined on the VMware server.
+
+ > [!NOTE]
+ > If the VMware server and Azure Backup Server aren't in the same domain, specify the domain in the **User name** box.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/mabs-add-credential-dialog2.png" alt-text="Screenshot showing the credential details in Azure Backup Server.":::
+
+1. Select **Add** to add the new credential.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/new-list-of-mabs-creds.png" alt-text="Screenshot showing the Azure Backup Server Manage Credentials dialog box with new credentials displayed.":::
+
+## Add the vCenter server to Azure Backup Server
+
+1. In the Azure Backup Server console, select **Management** > **Production Servers** > **Add**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vcenter-to-mabs.png" alt-text="Screenshot showing Microsoft Azure Backup console with the Add button selected.":::
+
+1. Select **VMware Servers**, and select **Next**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/production-server-add-wizard.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the VMware Servers option selected.":::
+
+1. Specify the IP address of the vCenter.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-server-provide-server-name.png" alt-text="Screenshot showing the Production Server Addition Wizard showing how to add a VMware vCenter or ESXi host server and its credentials.":::
+
+1. In the **SSL Port** box, enter the port used to communicate with the vCenter.
+
+ > [!TIP]
+ > Port 443 is the default port, but you can change it if your vCenter listens on a different port.
+
+1. In the **Specify Credential** box, select the credential that you created in the previous section.
+
+1. Select **Add** to add the vCenter to the servers list, and select **Next**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the VMware server and credentials defined.":::
+
+1. On the **Summary** page, select **Add** to add the vCenter to Azure Backup Server.
+
+ The new server gets added immediately. vCenter doesn't need an agent.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/tasks-screen.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the summary of the VMware server and credentials defined and the Add button selected.":::
+
+1. On the **Finish** page, review the settings, and then select **Close**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/summary-screen.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the summary of the VMware server and credentials added.":::
+
+ You see the vCenter server listed under **Production Server** with:
+ - Type as **VMware Server**
+ - Agent Status as **OK**
+
+ If you see **Agent Status** as **Unknown**, select **Refresh**.
+
+## Configure a protection group
+
+Protection groups gather multiple VMs and apply the same data retention and backup settings to all VMs in the group.
+
+1. In the Azure Backup Server console, select **Protection** > **New**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/open-protection-wizard.png" alt-text="Screenshot showing the Microsoft Azure Backup console with the New button selected to create a new Protection Group.":::
+
+1. On the **Create New Protection Group** wizard welcome page, select **Next**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/protection-wizard.png" alt-text="Screenshot showing the Protection Group Wizard.":::
+
+1. On the **Select Protection Group Type** page, select **Servers**, and then select **Next**. The **Select Group Members** page appears.
+
+1. On the **Select Group Members** page, select the VMs (or VM folders) that you want to back up, and then select **Next**.
+
+ > [!NOTE]
+ > When you select a folder or VMs, folders inside that folder are also selected for backup. You can uncheck folders or VMs you don't want to back up. If a VM or folder is already being backed up, you can't select it, which ensures duplicate recovery points aren't created for a VM.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/server-add-selected-members.png" alt-text="Screenshot showing the Create New Protection Group Wizard to select group members.":::
+
+1. On the **Select Data Protection Method** page, enter a name for the protection group and protection settings.
+
+1. Set the short-term protection to **Disk**, enable online protection, and then select **Next**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/name-protection-group.png" alt-text="Screenshot showing the Create New Protection Group Wizard to select the data protection method.":::
+
+1. Specify how long you want to keep data backed up to disk.
+
+ - **Retention range**: The number of days that disk recovery points are kept.
+ - **Express Full Backup**: How often disk recovery points are taken. To change the times or dates when short-term backups occur, select **Modify**.
+
+ :::image type="content" source="media/azure-vmware-solution-backup/new-protection-group-specify-short-term-goals.png" alt-text="Screenshot showing the retention range to specify short-term recovery goals for disk-based protection.":::
+
+1. On the **Review Disk Storage Allocation** page, review the disk space provided for the VM backups.
+
+ - The recommended disk allocations are based on the retention range you specified, the type of workload, and the size of the protected data. Make any changes required, and then select **Next**.
+ - **Data size:** Size of the data in the protection group.
+ - **Disk space:** Recommended amount of disk space for the protection group. If you want to modify this setting, select an amount of space slightly larger than the amount you estimate each data source will grow.
+ - **Storage pool details:** Shows the status of the storage pool, which includes total and remaining disk size.
+
+ :::image type="content" source="media/azure-vmware-solution-backup/review-disk-allocation.png" alt-text="Screenshot showing the Review disk Storage Allocation dialog to review your target storage assigned for each data source.":::
+
+ > [!NOTE]
+ > In some scenarios, the data size reported is higher than the actual VM size. We're aware of the issue and currently investigating it.
+
+1. On the **Choose Replica Creation Method** page, indicate how you want to take the initial backup, and select **Next**.
+
+ - The default is **Automatically over the network** and **Now**. If you use the default, specify an off-peak time. If you choose **Later**, specify a day and time.
+ - For large amounts of data or less-than-optimal network conditions, consider replicating the data offline by using removable media.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/name-protection-group.png" alt-text="Screenshot showing the Create New Protection Group Wizard to select the replica creation method.":::
+
+1. For **Consistency check options**, select how and when to automate the consistency checks and select **Next**.
+
+ - You can run consistency checks when replica data becomes inconsistent, or on a set schedule.
+ - If you don't want to configure automatic consistency checks, you can run a manual check by right-clicking the protection group and selecting **Perform Consistency Check**.
+
+1. On the **Specify Online Protection Data** page, select the VMs or VM folders that you want to back up, and then select **Next**.
+
+ > [!TIP]
+ > You can select the members individually or choose **Select All** to choose all members.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/select-data-to-protect.png" alt-text="Screenshot showing the Create New Protection Group Wizard to specify the data that you would like DPM to help protect online.":::
+
+1. On the **Specify Online Backup Schedule** page, indicate how often you want to back up data from local storage to Azure.
+
+ - Cloud recovery points for the data are generated according to the schedule.
+ - After a recovery point is generated, it's transferred to the Recovery Services vault in Azure.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/online-backup-schedule.png" alt-text="Screenshot showing the Create New Protection Group Wizard to specify the online backup schedule that DPM will use to generate your protection plan.":::
+
+1. On the **Specify Online Retention Policy** page, indicate how long you want to keep the recovery points created from the backups to Azure.
+
+ - There's no time limit for how long you can keep data in Azure.
+ - The only limit is that you can't have more than 9,999 recovery points per protected instance. In this example, the protected instance is the VMware server.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/retention-policy.png" alt-text="Screenshot showing the Create New Protection Group Wizard to specify online retention policy.":::
+
+1. On the **Summary** page, review the settings and then select **Create Group**.
+
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/protection-group-summary.png" alt-text="Screenshot showing the Create New Protection Group Wizard to summary.":::
+
+## Monitor with the Azure Backup Server console
+
+After you configure the protection group to back up Azure VMware Solution VMs, you can monitor the status of backup jobs and alerts by using the Azure Backup Server console. Here's what you can monitor.
+
+- In the **Monitoring** task area:
+ - Under **Alerts**, you can monitor errors, warnings, and general information. You can view active and inactive alerts and set up email notifications.
+ - Under **Jobs**, you can view jobs started by Azure Backup Server for a specific protected data source or protection group. You can follow job progress or check resources consumed by jobs.
+- In the **Protection** task area, you can check the status of volumes and shares in the protection group. You can also check configuration settings such as recovery settings, disk allocation, and the backup schedule.
+- In the **Management** task area, you can view the **Disks**, **Online**, and **Agents** tabs to check the status of disks in the storage pool, registration to Azure, and deployed DPM agent status.
++
+## Restore VMware virtual machines
+
+In the Azure Backup Server Administrator Console, there are two ways to find recoverable data. You can search or browse. When you recover data, you might or might not want to restore data or a VM to the same location. For this reason, Azure Backup Server supports three recovery options for VMware VM backups:
+
+- **Original location recovery (OLR)**: Use OLR to restore a protected VM to its original location. You can restore a VM to its original location only if no disks were added or deleted since the backup occurred. If disks were added or deleted, you must use alternate location recovery.
+- **Alternate location recovery (ALR)**: Use when the original VM is missing, or you don't want to disturb the original VM. Provide the location of an ESXi host, resource pool, folder, and the storage datastore and path. To help differentiate the restored VM from the original VM, Azure Backup Server appends *"-Recovered"* to the name of the VM.
+- **Individual file location recovery (ILR)**: If the protected VM is a Windows Server VM, individual files or folders inside the VM can be recovered by using the ILR capability of Azure Backup Server. To recover individual files, see the procedure later in this article. Restoring an individual file from a VM is available only for Windows VM and disk recovery points.
+
+### Restore a recovery point
+
+1. In the Azure Backup Server Administrator Console, select the **Recovery** view.
+
+1. Using the **Browse** pane, browse or filter to find the VM you want to recover. After you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
+
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/recovery-points.png" alt-text="Screenshot showing the available recovery points for VMware server.":::
+
+1. In the **Recovery points for** pane, select a date when a recovery point was taken. Calendar dates in bold have available recovery points. Alternately, you can right-click the VM and select **Show all recovery points** and then select the recovery point from the list.
+
+ > [!NOTE]
+ > For short-term protection, select a disk-based recovery point for faster recovery. After short-term recovery points expire, you see only **Online** recovery points to recover.
+
+1. Before recovering from an online recovery point, ensure the staging location contains enough free space to house the full uncompressed size of the VM you want to recover. The staging location can be viewed or changed by running the **Configure Subscription Settings Wizard**.
+
+ :::image type="content" source="media/azure-vmware-solution-backup/mabs-recovery-folder-settings.png" alt-text="Screenshot showing the recovery folder location.":::
+
+1. Select **Recover** to open the **Recovery Wizard**.
+
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/recovery-wizard.png" alt-text="Screenshot showing the Recovery Wizard review dialog.":::
+
+1. Select **Next** to go to the **Specify Recovery Options** screen. Select **Next** again to go to the **Select Recovery Type** screen.
+
+ > [!NOTE]
+ > VMware workloads don't support enabling network bandwidth throttling.
+
+1. On the **Select Recovery Type** page, choose whether to recover to the original instance or to a new location.
+
+ - If you choose **Recover to original instance**, you don't need to make any more choices in the wizard. The data for the original instance is used.
+ - If you choose **Recover as virtual machine on any host**, then on the **Specify Destination** screen, provide the information for **ESXi Host**, **Resource Pool**, **Folder**, and **Path**.
+
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/recovery-type.png" alt-text="Screenshot showing the Recovery Wizard to select recovery type.":::
+
+1. On the **Summary** page, review your settings and select **Recover** to start the recovery process.
+
+ The **Recovery status** screen shows the progression of the recovery operation.
+
+### Restore an individual file from a VM
+
+You can restore individual files from a protected VM recovery point. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VMDK and find the files you want before you start the recovery process.
+
+> [!NOTE]
+> Restoring an individual file from a VM is available only for Windows VM and disk recovery points.
+
+1. In the Azure Backup Server Administrator Console, select the **Recovery** view.
+
+1. Using the **Browse** pane, browse or filter to find the VM you want to recover. After you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
+
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/vmware-rp-disk.png" alt-text="Screenshot showing the recovery points for VMware server.":::
+
+1. In the **Recovery points for** pane, use the calendar to select the date that contains the wanted recovery points. Depending on how the backup policy was configured, dates can have more than one recovery point.
+
+1. After you select the day when the recovery point was taken, make sure you choose the correct **Recovery time**.
+
+ > [!NOTE]
+ > If the selected date has multiple recovery points, choose your recovery point by selecting it in the **Recovery time** drop-down menu.
+
+ After you choose the recovery point, the list of recoverable items appears in the **Path** pane.
+
+1. To find the files you want to recover, in the **Path** pane, double-click the item in the **Recoverable Item** column to open it. Then select the files or folders you want to recover. To select multiple items, press and hold the **Ctrl** key while you select each item. Use the **Path** pane to search the list of files or folders that appear in the **Recoverable Item** column.
+
+ > [!NOTE]
+ > **Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the **Up** button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
+
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/vmware-rp-disk-ilr-2.png" alt-text="Screenshot showing the date and time for the available recovery points selected.":::
+
+1. When you've selected the items for recovery, in the Administrator Console tool ribbon, select **Recover** to open the **Recovery Wizard**. In the **Recovery Wizard**, the **Review Recovery Selection** screen shows the selected items to be recovered.
+
+1. On the **Specify Recovery Options** screen, do one of the following steps:
+
+ - Select **Modify** to enable network bandwidth throttling. In the **Throttle** dialog box, select **Enable network bandwidth usage throttling** to turn it on. Once enabled, configure the **Settings** and **Work Schedule**.
+ - Select **Next** to leave network throttling disabled.
+
+1. On the **Select Recovery Type** screen, select **Next**. You can only recover your files or folders to a network folder.
+
+1. On the **Specify Destination** screen, select **Browse** to find a network location for your files or folders. Azure Backup Server creates a folder where all recovered items are copied. The folder name has the prefix MABS_day-month-year. When you select a location for the recovered files or folder, the details for that location are provided.
+
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/specify-destination.png" alt-text="Screenshot showing the date and time, destination, and destination path for the available recovery points selected.":::
+
+1. On the **Specify Recovery Options** screen, choose which security setting to apply. You can opt to modify the network bandwidth usage throttling, but throttling is disabled by default. Also, **SAN Recovery** and **Notification** aren't enabled.
+
+1. On the **Summary** screen, review your settings and select **Recover** to start the recovery process. The **Recovery status** screen shows the progression of the recovery operation.
+
+## Next steps
+
+Now that you've covered backing up your Azure VMware Solution VMs with Azure Backup Server, you may want to learn about:
+
+- [Troubleshooting when setting up backups in Azure Backup Server](../backup/backup-azure-mabs-troubleshoot.md).
+- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
azure-vmware Configure Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-windows-server-failover-cluster.md
Now that you've covered setting up a WSFC in Azure VMware Solution, you may want
- Setting up your new WSFC by adding more applications that require the WSFC capability. For instance, SQL Server and SAP ASCS. - Setting up a backup solution.
- - [Setting up Azure Backup Server for Azure VMware Solution](../backup/backup-azure-microsoft-azure-backup.md?context=%2fazure%2fazure-vmware%2fcontext%2fcontext)
- - [Backup solutions for Azure VMware Solution virtual machines](../backup/backup-azure-backup-server-vmware.md?context=%2fazure%2fazure-vmware%2fcontext%2fcontext)
+ - [Setting up Azure Backup Server for Azure VMware Solution](set-up-backup-server-for-azure-vmware-solution.md)
+ - [Backup solutions for Azure VMware Solution virtual machines](backup-azure-vmware-solution-virtual-machines.md)
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
The **AVS Interconnect** feature lets you create a network connection between tw
You can connect a private cloud to multiple private clouds, and the connections are non-transitive. For example, if _private cloud 1_ is connected to _private cloud 2_, and _private cloud 2_ is connected to _private cloud 3_, private clouds 1 and 3 would not communicate until they were directly connected.
-You can only connect to private clouds in the same region. To connect to private clouds that are in different regions, [use ExpressRoute Global Reach](tutorial-expressroute-global-reach-private-cloud.md) to connect your private clouds in the same way you connect your private cloud to your on-premises circuit.
+You can only connect private clouds in the same region. To connect private clouds that are in different regions, [use ExpressRoute Global Reach](tutorial-expressroute-global-reach-private-cloud.md) to connect your private clouds in the same way you connect your private cloud to your on-premises circuit.
>[!IMPORTANT] >The AVS Interconnect (Preview) feature is currently in public preview.
The AVS Interconnect (Preview) feature is available in all regions except for So
Now that you've connected multiple private clouds in the same region, you may want to learn about: - [Move Azure VMware Solution resources to another region](move-azure-vmware-solution-across-regions.md)-- [Move Azure VMware Solution subscription to another subscription](move-ea-csp-subscriptions.md)
+- [Move Azure VMware Solution subscription to another subscription](move-ea-csp-subscriptions.md)
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
+
+ Title: Set up Azure Backup Server for Azure VMware Solution
+description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server.
+ Last updated : 02/04/2021++
+# Set up Azure Backup Server for Azure VMware Solution
+
+Azure Backup Server contributes to your business continuity and disaster recovery (BCDR) strategy. With Azure VMware Solution, you can only configure a virtual machine (VM)-level backup using Azure Backup Server.
+
+Azure Backup Server can store backup data to:
+
+- **Disk**: For short-term storage, Azure Backup Server backs up data to disk pools.
+- **Azure**: For both short-term and long-term storage off-premises, Azure Backup Server data stored in disk pools can be backed up to the Microsoft Azure cloud by using Azure Backup.
+
+Use Azure Backup Server to restore data to the source or an alternate location. That way, if the original data is unavailable because of planned or unexpected issues, you can restore data to an alternate location.
+
+This article helps you prepare your Azure VMware Solution environment to back up VMs by using Azure Backup Server. We walk you through the steps to:
+
+> [!div class="checklist"]
+> * Determine the recommended VM disk type and size to use.
+> * Create a Recovery Services vault that stores the recovery points.
+> * Set the storage replication for a Recovery Services vault.
+> * Add storage to Azure Backup Server.
+
+## Supported VMware features
+
+- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, just provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware server with Azure Backup Server.
+- **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup.
+- **Detect and protect VMs managed by vCenter:** Azure Backup Server detects and protects VMs deployed on a vCenter or ESXi server. Azure Backup Server also detects VMs managed by vCenter so that you can protect large deployments.
+- **Folder-level auto protection:** vCenter lets you organize your VMs in VM folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
+- **Azure Backup Server continues to protect vMotioned VMs within the cluster:** As VMs are vMotioned for load balancing within the cluster, Azure Backup Server automatically detects and continues VM protection.
+- **Recover necessary files faster:** Azure Backup Server can recover files or folders from a Windows VM without recovering the entire VM.
+
+## Limitations
+
+- Update Rollup 1 for Azure Backup Server v3 must be installed.
+- You can't back up user snapshots before the first Azure Backup Server backup. After Azure Backup Server finishes the first backup, then you can back up user snapshots.
+- Azure Backup Server can't protect VMware VMs with pass-through disks and physical raw device mappings (pRDMs).
+- Azure Backup Server can't detect or protect VMware vApps.
+
+To set up Azure Backup Server for Azure VMware Solution, you must finish the following steps:
+
+- Set up the prerequisites and environment.
+- Create a Recovery Services vault.
+- Download and install Azure Backup Server.
+- Add storage to Azure Backup Server.
+
+### Deployment architecture
+
+Azure Backup Server is deployed as an Azure infrastructure as a service (IaaS) VM to protect Azure VMware Solution VMs.
++
+## Prerequisites for the Azure Backup Server environment
+
+Consider the recommendations in this section when you install Azure Backup Server in your Azure environment.
+
+### Azure Virtual Network
+
+Ensure that you [configure networking for your VMware private cloud in Azure](tutorial-configure-networking.md).
+
+### Determine the size of the VM
+
+Follow the instructions in the [Create your first Windows VM in the Azure portal](../virtual-machines/windows/quick-create-portal.md) tutorial. You'll create the VM in the virtual network that you created in the previous step. Start with a gallery image of Windows Server 2019 Datacenter to run the Azure Backup Server, or script the deployment as sketched below.
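+
+A minimal Azure PowerShell sketch of this step follows. The resource group, network, VM, and credential values are placeholders, and the size follows the sizing table later in this section.
+
+```powershell
+# Minimal sketch (placeholder names): create a Windows Server 2019 Datacenter VM
+# to run Azure Backup Server. Pick the size from the sizing table in this section.
+$cred = Get-Credential -Message "Local administrator account for the Backup Server VM"
+
+New-AzVM `
+    -ResourceGroupName "rg-avs-backup" `
+    -Name "vm-mabs-01" `
+    -Location "eastus" `
+    -VirtualNetworkName "vnet-avs-backup" `
+    -SubnetName "snet-mabs" `
+    -Image "Win2019Datacenter" `
+    -Size "Standard_DS3_v2" `
+    -Credential $cred
+```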
+
+The table summarizes the maximum number of protected workloads for each Azure Backup Server VM size. The information is based on internal performance and scale tests with canonical values for the workload size and churn. The actual workload size can be larger but should be accommodated by the disks attached to the Azure Backup Server VM.
+
+| Maximum protected workloads | Average workload size | Average workload churn (daily) | Minimum storage IOPS | Recommended disk type/size | Recommended VM size |
+|-|--|--||--||
+| 20 | 100 GB | Net 5% churn | 2,000 | Standard HDD (8 TB or above size per disk) | A4V2 |
+| 40 | 150 GB | Net 10% churn | 4,500 | Premium SSD* (1 TB or above size per disk) | DS3_V2 |
+| 60 | 200 GB | Net 10% churn | 10,500 | Premium SSD* (8 TB or above size per disk) | DS3_V2 |
+
+*To get the required IOPS, use disks of the minimum recommended size or larger. Smaller disks offer lower IOPS.
+
+> [!NOTE]
+> Azure Backup Server is designed to run on a dedicated, single-purpose server. You can't install Azure Backup Server on a computer that:
+> * Runs as a domain controller.
+> * Has the Application Server role installed.
+> * Is a System Center Operations Manager management server.
+> * Runs Exchange Server.
+> * Is a node of a cluster.
+
+### Disks and storage
+
+Azure Backup Server requires disks for installation.
+
+| Requirement | Recommended size |
+|-|-|
+| Azure Backup Server installation | Installation location: 3 GB<br />Database files drive: 900 MB<br />System drive: 1 GB for SQL Server installation<br /><br />You'll also need space for Azure Backup Server to copy the file catalog to a temporary installation location when you archive. |
+| Disk for storage pool<br />(Uses basic volumes, can't be on a dynamic disk) | Two to three times the protected data size.<br />For detailed storage calculation, see [DPM Capacity Planner](https://www.microsoft.com/download/details.aspx?id=54301). |
+
+To learn how to attach a new managed data disk to an existing Azure VM, see [Attach a managed data disk to a Windows VM by using the Azure portal](../virtual-machines/windows/attach-managed-disk-portal.md).
+
+> [!NOTE]
+> A single Azure Backup Server has a soft limit of 120 TB for the storage pool.
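+
+Attaching a data disk for the storage pool can also be scripted. The following is a minimal sketch with placeholder names; choose the disk size and SKU based on the sizing table earlier in this article.
+
+```powershell
+# Minimal sketch (placeholder names): create a managed data disk and attach it to the
+# Backup Server VM so it can be added to the MABS storage pool later.
+$vm = Get-AzVM -ResourceGroupName "rg-avs-backup" -Name "vm-mabs-01"
+
+$diskConfig = New-AzDiskConfig -Location $vm.Location -DiskSizeGB 1024 -SkuName "Premium_LRS" -CreateOption "Empty"
+$disk = New-AzDisk -ResourceGroupName "rg-avs-backup" -DiskName "mabs-storage-pool-01" -Disk $diskConfig
+
+$vm = Add-AzVMDataDisk -VM $vm -Name "mabs-storage-pool-01" -CreateOption "Attach" -ManagedDiskId $disk.Id -Lun 1
+Update-AzVM -ResourceGroupName "rg-avs-backup" -VM $vm
+```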
+
+### Store backup data on local disk and in Azure
+
+Storing backup data in Azure reduces backup infrastructure on the Azure Backup Server VM. For operational recovery (backup), Azure Backup Server stores backup data on Azure disks attached to the VM. After the disks and storage space are attached to the VM, Azure Backup Server manages the storage for you. The amount of storage depends on the number and size of disks attached to each Azure VM. Each size of the Azure VM has a maximum number of disks that can be attached. For example, A2 is four disks, A3 is eight disks, and A4 is 16 disks. Again, the size and number of disks determine the total backup storage pool capacity.
+
+> [!IMPORTANT]
+> You should *not* retain operational recovery data on Azure Backup Server-attached disks for more than five days. If data is more than five days old, store it in a Recovery Services vault.
+
+To store backup data in Azure, create or use a Recovery Services vault. When you prepare to back up the Azure Backup Server workload, you [configure the Recovery Services vault](#create-a-recovery-services-vault). Once configured, each time an online backup job runs, a recovery point is created in the vault. Each Recovery Services vault holds up to 9,999 recovery points. Depending on the number of recovery points created and how long they're kept, you can retain backup data for many years. For example, you could create monthly recovery points and keep them for five years.
+
+> [!IMPORTANT]
+> Whether you send backup data to Azure or keep it locally, you must register Azure Backup Server with a Recovery Services vault.
+
+### Scale deployment
+
+If you want to scale your deployment, you have the following options:
+
+- **Scale up**: Increase the size of the Azure Backup Server VM from A series to DS3 series, and increase the local storage.
+- **Offload data**: Send older data to Azure and keep only the newest data on the storage attached to the Azure Backup Server machine.
+- **Scale out**: Add more Azure Backup Server machines to protect the workloads.
+
+### .NET Framework
+
+The VM must have .NET Framework 3.5 SP1 or higher installed.
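+
+To confirm which .NET Framework versions are installed on the VM, you can run a quick, read-only registry check:
+
+```powershell
+# List installed .NET Framework versions from the registry (read-only check).
+Get-ChildItem "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP" -Recurse |
+    Get-ItemProperty -Name Version -ErrorAction SilentlyContinue |
+    Select-Object PSChildName, Version
+```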
+
+### Join a domain
+
+The Azure Backup Server VM must be joined to a domain. A domain user with administrator privileges on the VM must install Azure Backup Server.
+
+Azure Backup Server deployed in an Azure VM can back up workloads on the VMs in Azure VMware Solution. The workloads should be in the same domain to enable the backup operation.
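+
+If the VM isn't domain-joined yet, the built-in `Add-Computer` cmdlet handles this step. The domain name below is a placeholder:
+
+```powershell
+# Join the Backup Server VM to the domain and restart. The domain name is a placeholder.
+Add-Computer -DomainName "contoso.local" -Credential (Get-Credential -Message "Domain account with permission to join computers") -Restart
+```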
+
+## Create a Recovery Services vault
+
+A Recovery Services vault is a storage entity that stores the recovery points created over time. It also contains backup policies that are associated with protected items.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/), and on the left menu, select **All services**.
+
+1. In the **All services** dialog box, enter **Recovery Services** and select **Recovery Services vaults** from the list.
+
+ The list of Recovery Services vaults in the subscription appears.
+
+1. On the **Recovery Services vaults** dashboard, select **Add**.
+
+ The **Recovery Services vault** dialog box opens.
+
+1. Enter values and then select **Create**.
+
+ - **Name**: Enter a friendly name to identify the vault. The name must be unique to the Azure subscription. Specify a name that has at least two but not more than 50 characters. The name must start with a letter and consist only of letters, numbers, and hyphens.
+ - **Subscription**: Choose the subscription to use. If you're a member of only one subscription, you'll see that name. If you're not sure which subscription to use, use the default (suggested) subscription. There are multiple choices only if your work or school account is associated with more than one Azure subscription.
+ - **Resource group**: Use an existing resource group or create a new one. To see the list of available resource groups in your subscription, select **Use existing**, and then select a resource from the drop-down list. To create a new resource group, select **Create new** and enter the name.
+ - **Location**: Select the geographic region for the vault. To create a vault to protect Azure VMware Solution virtual machines, the vault *must* be in the same region as the Azure VMware Solution private cloud.
+
+ It can take a while to create the Recovery Services vault. Monitor the status notifications in the **Notifications** area in the upper-right corner of the portal. After creating your vault, it's visible in the list of Recovery Services vaults. If you don't see your vault, select **Refresh**.
++
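+If you prefer scripting, the same vault can be created with Azure PowerShell. This is a minimal sketch with placeholder names; it assumes the resource group already exists, and the location must match your Azure VMware Solution private cloud region.
+
+```powershell
+# Minimal sketch (placeholder names): create a Recovery Services vault in the same
+# region as the Azure VMware Solution private cloud. Assumes the resource group exists.
+New-AzRecoveryServicesVault -ResourceGroupName "rg-avs-backup" -Name "rsv-avs-backup" -Location "eastus"
+```
+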
+## Set storage replication
+
+The storage replication option lets you choose between geo-redundant storage (the default) and locally redundant storage. Geo-redundant storage copies the data in your storage account to a secondary region, making your data durable. Locally redundant storage is a cheaper option that isn't as durable. To learn more about geo-redundant and locally redundant storage options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
+
+> [!IMPORTANT]
+> You must set the storage replication type (**Locally-redundant** or **Geo-redundant**) for a Recovery Services vault before you configure backups in the vault. After you configure backups, the option is disabled and you can't change the storage replication type.
+
+1. From **Recovery Services vaults**, select the new vault.
+
+1. Under **Settings**, select **Properties**. Under **Backup Configuration**, select **Update**.
+
+1. Select the storage replication type, and select **Save**.
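+
+The same setting can be changed with Azure PowerShell while the vault is still empty. A minimal sketch with placeholder names:
+
+```powershell
+# Minimal sketch (placeholder names): switch a new, empty vault to locally redundant storage.
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-avs-backup" -Name "rsv-avs-backup"
+Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant
+```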
+
+## Download and install the software package
+
+Follow the steps in this section to download, extract, and install the software package.
+
+### Download the software package
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. If you already have a Recovery Services vault open, continue to the next step.
+
+ >[!TIP]
+ >If you don't have a Recovery Services vault open, and you're in the Azure portal, in the list of resources enter **Recovery Services** > **Recovery Services vaults**.
+
+1. From the list of Recovery Services vaults, select a vault.
+
+ The selected vault dashboard opens.
+
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/vault-dashboard.png" alt-text="Screenshot showing the vault dashboard.":::
+
+ The **Settings** option opens by default. If closed, select **Settings** to open it.
+
+1. Select **Backup** to open the **Getting Started** wizard.
+
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/getting-started-backup.png" alt-text="Screenshot showing the Backup option selected under Getting Started wizard.":::
+
+1. In the window that opens:
+
+ 1. From the **Where is your workload running?** menu, select **On-Premises**.
+
+ :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-on-premises-workload.png" alt-text="Screenshot showing the options for where your workload runs and what to backup.":::
+
+ 1. From the **What do you want to back up?** menu, select the workloads you want to protect by using Azure Backup Server.
+
+ 1. Select **Prepare Infrastructure** to download and install Azure Backup Server and the vault credentials.
+
+ :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-prepare-infrastructure.png" alt-text="Screenshot showing the step to prepare the infrastructure.":::
+
+1. In the **Prepare infrastructure** window that opens:
+
+ 1. Select the **Download** link to install Azure Backup Server.
+
+ 1. Select **Already downloaded or using the latest Azure Backup Server installation** and then **Download** to download the vault credentials. You'll use these credentials when you register the Azure Backup Server to the Recovery Services vault. The links take you to the Download Center, where you download the software package.
+
+ :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-prepare-infrastructure-2.png" alt-text="Screenshot showing the steps to prepare the infrastructure for Azure Backup Server.":::
+
+1. On the download page, select all the files and select **Next**.
+
+ > [!NOTE]
+ > You must download all the files to the same folder. Because the download size of the files together is greater than 3 GB, it might take up to 60 minutes for the download to complete.
+
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/downloadcenter.png" alt-text="Screenshot showing Microsoft Azure Backup files to download.":::
+
+### Extract the software package
+
+If you downloaded the software package to a different server, copy the files to the VM you created to deploy Azure Backup Server.
+
+> [!WARNING]
+> At least 4 GB of free space is required to extract the setup files.
+
+1. After you've downloaded all the files, double-click **MicrosoftAzureBackupInstaller.exe** to open the **Microsoft Azure Backup** setup wizard, and then select **Next**.
+
+1. Select the location to extract the files to and select **Next**.
+
+1. Select **Extract** to begin the extraction process.
+
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/extract/03.png" alt-text="Screenshot showing Microsoft Azure Backup files ready to extract.":::
+
+1. Once extracted, select the option to **Execute setup.exe** and then select **Finish**.
+
+> [!TIP]
+> You can also locate the setup.exe file from the folder where you extracted the software package.
+
+### Install the software package
+
+1. On the setup window under **Install**, select **Microsoft Azure Backup** to open the setup wizard.
+
+1. On the **Welcome** screen, select **Next** to continue to the **Prerequisite Checks** page.
+
+1. To determine if the hardware and software meet the prerequisites for Azure Backup Server, select **Check Again**. If met successfully, select **Next**.
+
+1. The Azure Backup Server installation package comes bundled with the appropriate SQL Server binaries that are needed. When you start a new Azure Backup Server installation, select the **Install new Instance of SQL Server with this Setup** option. Then select **Check and Install**.
+
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/sql/01.png" alt-text="Screenshot showing the SQL settings dialog and the Install new instance of SQL Server with this Setup option selected.":::
+
+ > [!NOTE]
+ > If you want to use your own SQL Server instance, the supported SQL Server versions are SQL Server 2014 SP1 or higher, 2016, and 2017. All SQL Server versions should be Standard or Enterprise 64-bit. The instance used by Azure Backup Server must be local only; it can't be remote. If you use an existing SQL Server instance for Azure Backup Server, the setup only supports the use of *named instances* of SQL Server.
+
+ If a failure occurs with a recommendation to restart the machine, do so, and select **Check Again**. For any SQL Server configuration issues, reconfigure SQL Server according to the SQL Server guidelines. Then retry to install or upgrade Azure Backup Server using the existing instance of SQL Server.
+
+ **Manual configuration**
+
+ When you use your own SQL Server instance, make sure you add builtin\Administrators to the sysadmin role of the master database.
+
+ **Configure reporting services with SQL Server 2017**
+
+ If you use your own instance of SQL Server 2017, you must configure SQL Server 2017 Reporting Services (SSRS) manually. After configuring SSRS, make sure to set the **IsInitialized** property of SSRS to **True**. When set to **True**, Azure Backup Server assumes that SSRS is already configured and skips the SSRS configuration.
+
+ To check the SSRS configuration status, run:
+
+ ```powershell
+ $configset = Get-WmiObject -Namespace "root\Microsoft\SqlServer\ReportServer\RS_SSRS\v14\Admin" -Class MSReportServer_ConfigurationSetting -ComputerName localhost
+
+ $configset.IsInitialized
+ ```
+
+ Use the following values for SSRS configuration:
+
+ * **Service Account**: **Use built-in account** should be **Network Service**.
+ * **Web Service URL**: **Virtual Directory** should be **ReportServer_\<SQLInstanceName>**.
+ * **Database**: **DatabaseName** should be **ReportServer$\<SQLInstanceName>**.
+ * **Web Portal URL**: **Virtual Directory** should be **Reports_\<SQLInstanceName>**.
+
+ [Learn more](/sql/reporting-services/report-server/configure-and-administer-a-report-server-ssrs-native-mode) about SSRS configuration.
+
+ > [!NOTE]
+ > [Microsoft Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) (OST) governs the licensing for SQL Server used as the database for Azure Backup Server. According to OST, only use SQL Server bundled with Azure Backup Server as the database for Azure Backup Server.
+
+1. After the installation is successful, select **Next**.
+
+1. Provide a location for installing Microsoft Azure Backup Server files, and select **Next**.
+
+ > [!NOTE]
+ > The scratch location is required for backup to Azure. Ensure the scratch location is at least 5% of the data planned for backup to the cloud. For disk protection, you need to configure separate disks after the installation finishes. For more information about storage pools, see [Configure storage pools and disk storage](/previous-versions/system-center/system-center-2012-r2/hh758075(v=sc.12)).
+
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/space-screen.png" alt-text="Screenshot showing the SQL Server settings.":::
+
+1. Provide a strong password for restricted local user accounts, and select **Next**.
+
+1. Select whether you want to use Microsoft Update to check for updates, and select **Next**.
+
+ > [!NOTE]
+ > We recommend having Windows Update redirect to Microsoft Update, which offers security and important updates for Windows and other products like Azure Backup Server.
+
+1. Review the **Summary of Settings**, and select **Install**.
+
+ The installation happens in phases.
+ - The first phase installs the Microsoft Azure Recovery Services Agent.
+ - The second phase checks for internet connectivity. If available, you can continue with the installation. If not available, you must provide proxy details to connect to the internet.
+ - The final phase checks the prerequisite software. If not installed, any missing software gets installed along with the Microsoft Azure Recovery Services Agent.
+
+1. Select **Browse** to locate your vault credentials to register the machine to the Recovery Services vault, and then select **Next**.
+
+1. Select a passphrase to encrypt or decrypt the data sent between Azure and your premises.
+
+ > [!TIP]
+ > You can automatically generate a passphrase or provide your own passphrase of at least 16 characters.
+
+1. Enter the location to save the passphrase, and then select **Next** to register the server.
+
+ > [!IMPORTANT]
+ > Save the passphrase to a safe location other than the local server. We strongly recommend using the Azure Key Vault to store the passphrase.
+
+ After the Microsoft Azure Recovery Services Agent setup finishes, the installation step moves on to the installation and configuration of SQL Server and the Azure Backup Server components.
+
+1. After the installation step finishes, select **Close**.
+
+### Install Update Rollup 1
+
+Installing the Update Rollup 1 for Azure Backup Server v3 is mandatory before you can protect the workloads. You can find the bug fixes and installation instructions in the [knowledge base article](https://support.microsoft.com/en-us/help/4534062/).
+
+## Add storage to Azure Backup Server
+
+Azure Backup Server v3 supports Modern Backup Storage that offers:
+
+- Storage savings of 50%.
+- Backups that are three times faster.
+- More efficient storage.
+- Workload-aware storage.
+
+### Volumes in Azure Backup Server
+
+If you haven't already, add data disks with the required storage capacity to the Azure Backup Server VM.
+
+Azure Backup Server v3 only accepts storage volumes. When you add a volume, Azure Backup Server formats the volume to Resilient File System (ReFS), which Modern Backup Storage requires.
+
+### Add volumes to Azure Backup Server disk storage
+
+1. In the **Management** pane, rescan the storage and then select **Add**.
+
+1. Select from the available volumes to add to the storage pool.
+
+1. After you add the available volumes, give them a friendly name to help you manage them.
+
+1. Select **OK** to format these volumes to ReFS so that Azure Backup Server can use Modern Backup Storage benefits.
++
+## Next steps
+
+Now that you've covered how to set up Azure Backup Server for Azure VMware Solution, you may want to learn about:
+
+- [Configuring backups for your Azure VMware Solution VMs](backup-azure-vmware-solution-virtual-machines.md).
+- [Protecting your Azure VMware Solution VMs with Azure Security Center integration](azure-security-integration.md).
backup Azure Policy Configure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-policy-configure-diagnostics.md
To simplify the creation of diagnostics settings at scale (with LA as the destin
* Management Group scope is currently unsupported.
-* The built-in policy is currently not available in national clouds.
- [!INCLUDE [backup-center.md](../../includes/backup-center.md)] ## Assigning the built-in policy to a scope To assign the policy for vaults in the required scope, follow the steps below:
-1. Sign in to the Azure portal and navigate to the **Policy** Dashboard.
-2. Select **Definitions** in the left menu to get a list of all built-in policies across Azure Resources.
-3. Filter the list for **Category=Backup**. Locate the policy named **[Preview]: Deploy Diagnostic Settings for Recovery Services Vault to Log Analytics workspace for resource-specific categories**.
+1. Sign in to the Azure portal and navigate to the **Backup center** dashboard.
+2. Select **Azure policies for backup** in the left menu to get a list of all built-in policies across Azure Resources.
+3. Locate the policy named **Deploy Diagnostic Settings for Recovery Services Vault to Log Analytics workspace for resource-specific categories**.
![Policy Definition pane](./media/backup-azure-policy-configure-diagnostics/policy-definition-blade.png)
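If you assign the policy programmatically instead of through the portal, the sketch below shows the general shape with Azure PowerShell. The display-name filter, assignment name, and the `logAnalytics` parameter name are assumptions; inspect `$definition.Properties.Parameters` for the exact parameter names the built-in policy expects.

```powershell
# Sketch only: locate the built-in policy by display name and assign it at subscription scope.
# A DeployIfNotExists policy needs a managed identity, hence -AssignIdentity and -Location.
# 'logAnalytics' is an assumed parameter name; check $definition.Properties.Parameters for the real names.
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -like "*Deploy Diagnostic Settings for Recovery Services Vault*" }

New-AzPolicyAssignment -Name "deploy-rsv-diagnostics" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ logAnalytics = "<log-analytics-workspace-resource-id>" } `
    -Location "eastus" `
    -AssignIdentity
```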
backup Backup Azure Auto Enable Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-auto-enable-backup.md
Today, Azure Backup provides a variety of built-in policies (using [Azure Policy
If your organization has a central backup team that manages backups across application teams, you can use this policy to configure backup to an existing central Recovery Services vault in the same subscription and location as the VMs being governed. You can choose to **exclude** VMs which contain a certain tag, from the scope of this policy.
-## Policy 2 - [Preview] Configure backup on VMs with a given tag to an existing recovery services vault in the same location
+## Policy 2 - Configure backup on VMs with a given tag to an existing recovery services vault in the same location
This policy works the same as Policy 1 above, with the only difference being that you can use this policy to **include** VMs which contain a certain tag, in the scope of this policy.
-## Policy 3 - [Preview] Configure backup on VMs without a given tag to a new recovery services vault with a default policy
+## Policy 3 - Configure backup on VMs without a given tag to a new recovery services vault with a default policy
If you organize applications in dedicated resource groups and want to have them backed up by the same vault, this policy allows you to automatically manage this action. You can choose to **exclude** VMs which contain a certain tag, from the scope of this policy.
-## Policy 4 - [Preview] Configure backup on VMs with a given tag to a new recovery services vault with a default policy
+## Policy 4 - Configure backup on VMs with a given tag to a new recovery services vault with a default policy
This policy works the same as Policy 3 above, with the only difference being that you can use this policy to **include** VMs which contain a certain tag, in the scope of this policy. In addition to the above, Azure Backup also provides an [audit-only](../governance/policy/concepts/effects.md#audit) policy - **Azure Backup should be enabled for Virtual Machines**. This policy identifies which virtual machines do not have backup enabled, but doesn't automatically configure backups for these VMs. This is useful when you are only looking to evaluate the overall compliance of the VMs but not looking to take action immediately.
In addition to the above, Azure Backup also provides an [audit-only](../governan
* For Policies 1 and 2, the specified vault and the VMs configured for backup can be under different resource groups.
-* Policies 1, 2, 3 and 4 are currently not available in national clouds.
- * Policies 3 and 4 can be assigned to a single subscription at a time (or a resource group within a subscription). [!INCLUDE [backup-center.md](../../includes/backup-center.md)]
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-monitoring-built-in-monitor.md
The following table summarizes the different backup alerts currently available (
| Security | Upcoming Purge | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM | For all workloads which support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service | | Security | Purge Complete | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM | Delete Backup Data | | Security | Soft Delete Disabled for Vault | Recovery Services vaults | This alert is fired when the soft-deleted backup data for an item has been permanently deleted by the Azure Backup service |
-| Jobs | Backup Failure | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Backup Agent <br><br> <li> Azure Files <br><br> <li> Azure Database for PostgreSQL Server <br><br> <li> Azure Blobs <br><br> <li> Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
+| Jobs | Backup Failure | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Backup Agent <br><br> <li> Azure Files <br><br> <li> Azure Database for PostgreSQL Server <br><br> <li> Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
| Jobs | Restore Failure | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM (non-AG scenarios) <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Backup Agent <br><br> <li> Azure Files <br><br> <li> Azure Database for PostgreSQL Server <br><br> <li> Azure Blobs <br><br> <li> Azure Managed Disks| This alert is fired when a restore job failure has occurred. By default, alerts for restore failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. | ### Turning on Azure Monitor alerts for job failure scenarios
backup Backup Azure Security Feature Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-security-feature-cloud.md
This flow chart shows the different steps and states of a backup item when Soft
## Enabling and disabling soft delete
-Soft delete is enabled by default on newly created vaults to protect backup data from accidental or malicious deletes. Disabling this feature isn't recommended. The only circumstance where you should consider disabling soft delete is if you're planning on moving your protected items to a new vault, and can't wait the 14 days required before deleting and reprotecting (such as in a test environment). Only the vault owner with the Contributor role (that provides permissions to perform Microsoft.RecoveryServices/Vaults/backupconfig/write on the vault) can disable this feature. If you disable this feature, all future deletions of protected items will result in immediate removal, without the ability to restore. Backup data that exists in soft deleted state before disabling this feature, will remain in soft deleted state for the period of 14 days. If you wish to permanently delete these immediately, then you need to undelete and delete them again to get permanently deleted.
+Soft delete is enabled by default on newly created vaults to protect backup data from accidental or malicious deletes. Disabling this feature isn't recommended. The only circumstance where you should consider disabling soft delete is if you're planning on moving your protected items to a new vault, and can't wait the 14 days required before deleting and reprotecting (such as in a test environment).
+
+To disable soft delete on a vault, you must have the Backup Contributor role for that vault (you should have permissions to perform Microsoft.RecoveryServices/Vaults/backupconfig/write on the vault). If you disable this feature, all future deletions of protected items will result in immediate removal, without the ability to restore. Backup data that exists in soft deleted state before disabling this feature, will remain in soft deleted state for the period of 14 days. If you wish to permanently delete these immediately, then you need to undelete and delete them again to get permanently deleted.
It's important to remember that once soft delete is disabled, the feature is disabled for all the types of workloads. For example, it's not possible to disable soft delete only for SQL server or SAP HANA DBs while keeping it enabled for virtual machines in the same vault. You can create separate vaults for granular control.
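For reference, soft delete can also be disabled with Azure PowerShell. A minimal sketch with placeholder names:

```powershell
# Minimal sketch (placeholder names): disable soft delete on a vault.
# Not recommended outside of test scenarios such as the one described above.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-test" -Name "rsv-test"
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState Disable
```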
backup Backup Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-managed-disks.md
The following prerequisites are required to configure backup of managed disks:
- You can't change the Snapshot Resource Group that is assigned to a backup instance when you configure the backup of a disk.
- - During a backup operation, Azure Backup service creates a Storage Account in the Snapshot Resource Group, where the snapshots are stored. Only one Storage Account is created per a snapshot Resource Group. The account is reused across multiple Disk backup instances that use the same Resource Group as the Snapshot resource group.
+ - During a backup operation, Azure Backup creates a Storage Account in the Snapshot Resource Group. Only one Storage Account is created per snapshot Resource Group. The account is reused across multiple Disk backup instances that use the same Resource Group as the Snapshot resource group.
- Snapshots are not stored in the Storage Account. A managed disk's incremental snapshots are ARM resources that are created in the Resource group, not in a Storage Account.
To assign the role, follow these steps:
![Select disks to back up](./media/backup-managed-disks/select-disks-to-backup.png) >[!NOTE]
- >While the portal allows you to select multiple disks and configure backup, each disk is an individual backup instance. Currently Azure Disk Backup only supports backup of individual disks. Point-in-time backup of multiple disks attached to a virtual disk isn't supported.
+ >While the portal allows you to select multiple disks and configure backup, each disk is an individual backup instance. Currently Azure Disk Backup only supports backup of individual disks. Point-in-time backup of multiple disks attached to a virtual machine isn't supported.
> >When using the portal, you're limited to selecting disks within the same subscription. If you have several disks to be backed up or if the disks are spread across different subscriptions, you can use scripts to automate. >
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/private-endpoints.md
Title: Private Endpoints description: Understand the process of creating private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 05/07/2020 Last updated : 07/06/2021
While private endpoints are enabled for the vault, they're used for backup and r
| **Azure VM backup** | VM backup doesn't require you to allow access to any IPs or FQDNs. So it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to **trusted Microsoft services** if it's ACL'ed. | | **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
+>[!Note]
+>Private endpoints aren't supported with DPM and MABS servers.
+ ## Get started with creating private endpoints for Backup The following sections discuss the steps involved in creating and using private endpoints for Azure Backup inside your virtual networks.
bastion Bastion Connect Vm Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-connect-vm-ssh.md
In order to connect to the Linux VM via SSH, you must have the following ports o
## <a name="akv"></a>Connect: Using a private key stored in Azure Key Vault
->[!NOTE]
->The portal update for this feature is currently rolling out to regions.
->
- 1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown. :::image type="content" source="./media/bastion-connect-vm-ssh/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected":::
In order to connect to the Linux VM via SSH, you must have the following ports o
1. On the **Connect using Azure Bastion** page, enter the **Username** and select **SSH Private Key from Azure Key Vault**. :::image type="content" source="./media/bastion-connect-vm-ssh/ssh-key-vault.png" alt-text="SSH Private Key from Azure Key Vault":::
-1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key. If you didnΓÇÖt set up an Azure Key Vault resource, see [Create a key vault](../key-vault/general/quick-create-portal.md) and store your SSH private key as the value of a new Key Vault secret.
+1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key. If you didn't set up an Azure Key Vault resource, see [Create a key vault](https://docs.microsoft.com/azure/key-vault/secrets/quick-create-powershell) and store your SSH private key as the value of a new Key Vault secret.
+
+ >[!NOTE]
+ >Store your SSH private key as a secret in Azure Key Vault by using **PowerShell** or the **Azure CLI** (see the sketch after these steps). Storing your private key through the Azure Key Vault portal experience interferes with the formatting and results in an unsuccessful sign-in. If you stored your private key as a secret by using the portal experience and no longer have access to the original private key file, see [Update SSH key](https://docs.microsoft.com/azure/virtual-machines/extensions/vmaccess#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ >
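For reference, the same secret can also be created programmatically instead of through PowerShell or the Azure CLI. The following is a minimal sketch using the Azure Key Vault Python SDK; the vault URL, secret name, and key file path are placeholder values:

```python
# Minimal sketch: store an SSH private key as a Key Vault secret without the portal.
# Placeholders: the vault URL, secret name, and key file path are example values.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=credential,
)

# Read the private key file as-is so its formatting is preserved.
with open("/path/to/ssh-private-key.pem", "r") as key_file:
    private_key = key_file.read()

client.set_secret("bastion-ssh-private-key", private_key)
```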
:::image type="content" source="./media/bastion-connect-vm-ssh/key-vault.png" alt-text="Azure Key Vault"::: Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). 1. Select the **Azure Key Vault Secret** dropdown and select the Key Vault secret containing the value of your SSH private key.
-1. Select **Connect** to connect to the VM. Once you click **Connect**, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
+3. Select **Connect** to connect to the VM. Once you click **Connect**, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
## Next steps
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-faq.md
At this time, IPv6 is not supported. Azure Bastion supports IPv4 only.
### Can I use Azure Bastion with Azure Private DNS Zones?
-The use of Azure Bastion with Private endpoint integrated Azure Private DNS Zones is not supported at this time. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a Private endpoint integrated private DNS zone.
+Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select does not overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following in the name:
+* core.windows.net
+* azure.com
+
+Note that if you're using a private endpoint-integrated Azure Private DNS zone, the [recommended DNS zone names](https://docs.microsoft.com/azure/private-link/private-endpoint-dns#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion is *not* supported with these setups.
+
+The use of Azure Bastion is also not supported with Azure Private DNS Zones in national clouds.
+ ### <a name="rdpssh"></a>Do I need an RDP or SSH client?
cognitive-services FAQ https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/FAQ.md
- Title: Frequently asked questions - Computer Vision-
-description: Get answers to frequently asked questions about the Computer Vision API in Azure Cognitive Services.
------- Previously updated : 04/17/2019----
-# Computer Vision API Frequently Asked Questions
-
-> [!TIP]
-> If you can't find answers to your questions in this FAQ, try asking the Computer Vision API community on [StackOverflow](https://stackoverflow.com/questions/tagged/project-oxford+or+microsoft-cognitive) or contact Help and Support on [UserVoice](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395743)
---
-**Question**: *Can I train Computer Vision API to use custom tags? For example, I would like to feed in pictures of cat breeds to 'train' the AI, then receive the breed value on an AI request.*
-
-**Answer**: This function is currently not available. However, our engineers are working to bring this functionality to Computer Vision.
----
-**Question**: *Can I deploy the OCR (Read) capability on-premise?*
-
-**Answer**: Yes, the OCR (Read) cloud API is also available as a Docker container for on-premise deployment. Learn [how to deploy the OCR containers](./computer-vision-how-to-install-containers.md).
---
-**Question**: *Can Computer Vision be used to read license plates?*
-
-**Answer**: The Vision API includes the deep learning powered OCR capabilities with the latest Read feature. We are constantly trying to improve our services to work across all scenarios.
--
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
In this article, you learned concepts and workflow for downloading, installing,
* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings * Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container.
-* Refer to [Frequently asked questions (FAQ)](FAQ.md) to resolve issues related to Computer Vision functionality.
+* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Computer Vision functionality.
* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
The container also has the following container-specific configuration settings:
|No|Storage:ObjectStore:AzureBlob:ConnectionString| v3.x containers only. Azure blob storage connection string. | |No|Storage:TimeToLiveInDays| v3.x containers only. Result expiration period in days. The setting specifies when the system should clear recognition results. The default is 2 days (48 hours), which means any result live for longer than that period is not guaranteed to be successfully retrieved. | |No|Task:MaxRunningTimeSpanInMinutes| v3.x containers only. Maximum running time for a single request. The default is 60 minutes. |
+|No|EnableSyncNTPServer| v3.x containers only. Enables the NTP server synchronization mechanism, which ensures synchronization between the system time and expected task runtime. Note that this requires external network traffic. The default is `true`. |
+|No|NTPServerAddress| v3.x containers only. NTP server for time synchronization. The default is `time.windows.com`. |
+|No|Mounts::Shared| v3.x containers only. Local folder for storing recognition results. The default is `/share`. When running the container without Azure Blob Storage, we recommend mounting a volume to this folder to ensure you have enough space for the recognition results. |
## ApiKey configuration setting
cognitive-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set
* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings * Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container.
-* Refer to [Frequently asked questions (FAQ)](FAQ.md) to resolve issues related to Computer Vision functionality.
+* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Computer Vision functionality.
* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Howtodetectfacesinimage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md
In this guide, you learned how to use the various functionalities of face detect
## Related topics - [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/client/faceapi)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
In this guide, you learned how to mitigate latency when using the Face service.
## Related topics - [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/client/faceapi)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/batch-transcription.md
Previously updated : 12/23/2020 Last updated : 06/17/2021
Call [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/serv
regularly from the service once you have retrieved the results. Alternatively, set the `timeToLive` property to ensure eventual deletion of the results.
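As a hedged illustration of the cleanup described above (not taken from the article itself), deleting a finished transcription with the v3.0 REST API could look like the following sketch; the region, subscription key, and transcription ID are placeholders:

```python
# Minimal sketch: delete a completed batch transcription so results don't accumulate.
# Placeholders: the region, subscription key, and transcription ID are example values.
import requests

region = "<your-region>"                 # for example, "westus"
subscription_key = "<your-speech-key>"
transcription_id = "<transcription-id>"  # taken from the "self" URL returned at creation time

url = (
    f"https://{region}.api.cognitive.microsoft.com"
    f"/speechtotext/v3.0/transcriptions/{transcription_id}"
)
response = requests.delete(url, headers={"Ocp-Apim-Subscription-Key": subscription_key})
response.raise_for_status()

# Alternatively, set the timeToLive property when creating the transcription so the
# service deletes the results automatically after the configured period.
```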
+> [!TIP]
+> You can use the [Ingestion Client](ingestion-client.md) tool and resulting solution to process a high volume of audio.
+ ## Sample code Complete samples are available in the [GitHub sample repository](https://aka.ms/csspeech/samples) inside the `samples/batch` subdirectory.
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/ingestion-client.md
+
+ Title: Ingestion Client - Speech service
+
+description: In this article, we describe a tool released on GitHub that enables customers to push audio files to the Speech service easily and quickly
++++++ Last updated : 06/17/2021++++
+# Ingestion Client for the Speech service
+
+The Ingestion Client is a tool released on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/ingestion) that enables customers to transcribe audio files through Speech services quickly with little or no development effort. It works by wiring up a dedicated [Azure storage](https://azure.microsoft.com/product-categories/storage/) account to custom [Azure Functions](https://azure.microsoft.com/services/functions/) that use either the [REST API](rest-speech-to-text.md) or the [SDK](speech-sdk.md) in a serverless fashion to pass transcription requests to the service.
+
+## Architecture
+
+The tool helps customers who want to get an idea of transcript quality without making up-front development investments. It connects a few resources to transcribe audio files that land in the dedicated [Azure Storage container](https://azure.microsoft.com/en-us/product-categories/storage/).
+
+Internally, the tool uses our V3.0 Batch API or SDK, and follows best practices to handle scale-up, retries and failover. The following schematic describes the resources and connections.
++
+The [Getting Started Guide for the Ingestion Client](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/ingestion/ingestion-client/Setup/guide.md) describes how to set up and use the tool.
+
+> [!IMPORTANT]
+> Pricing varies depending on the mode of operation (batch vs. real time) and the Azure Functions SKU selected. By default, the tool creates a Premium Azure Functions SKU to handle large volumes. Visit the [Pricing](https://azure.microsoft.com/pricing/details/functions/) page for more information.
+
+Both the Microsoft [Speech SDK](speech-sdk.md) and the [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) can be used to obtain transcripts. The choice affects overall costs, as explained in the guide.
+
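To make the batch path concrete, the following is a minimal sketch of the kind of v3.0 REST request used to submit a transcription; the region, subscription key, and audio SAS URL are placeholder values:

```python
# Minimal sketch: submit a batch transcription with the Speech-to-text REST API v3.0.
# Placeholders: the region, subscription key, and the SAS URL of the audio file.
import requests

region = "<your-region>"                # for example, "westeurope"
subscription_key = "<your-speech-key>"

body = {
    "displayName": "ingestion-client-style transcription",
    "locale": "en-US",
    # SAS URL(s) pointing at audio blobs, such as the files landing in the storage container.
    "contentUrls": ["https://<storage-account>.blob.core.windows.net/<container>/audio.wav?<sas>"],
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": subscription_key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print("Created:", response.json()["self"])  # URL to poll for transcription status
```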
+> [!TIP]
+> You can use the tool and resulting solution in production to process a high volume of audio.
+
+## Tool customization
+
+The tool is built to show customers results quickly. You can customize the tool to your preferred SKUs and setup. The SKUs can be edited from the [Azure portal](https://portal.azure.com) and [the code itself is available on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+
+> [!NOTE]
+> We suggest creating the resources in the same dedicated resource group to understand and track costs more easily.
+
+## Next steps
+
+* [Get your Speech service trial subscription](https://azure.microsoft.com/try/cognitive-services/)
+* [Learn more about Speech SDK](./speech-sdk.md)
+* [Learn about the Speech CLI tool](./spx-overview.md)
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/regions.md
Title: Regions - Speech service
description: A list of available regions and endpoints for the Speech service, including speech-to-text, text-to-speech, and speech translation. -+ Previously updated : 08/20/2020 Last updated : 07/01/2021
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Previously updated : 01/08/2021 Last updated : 07/01/2021
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
Previously updated : 01/08/2021 Last updated : 07/01/2021
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-overview.md
## Set up your Azure account
-You need to have an Azure account and Speech service subscription before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+You need to have an Azure account and add a Speech service resource before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and resource, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
-> [!NOTE]
-> Please be sure to create a standard (S0) subscription. Free (F0) subscriptions aren't supported.
+After you create an Azure account and a Speech service resource:
-After you create an Azure account and a Speech service subscription:
-
-1. Sign in to the [Speech Studio](https://speech.microsoft.com).
-1. Select the subscription you need to work in and create a speech project.
-1. If you want to modify your subscription, select the cog button in the top menu.
+1. Sign in to the [Speech Studio](https://speech.microsoft.com) with your Azure account.
+1. Select the Speech service resource you need to get started. (You can change the resources anytime in "Settings" in the top menu.)
## Speech Studio features
cognitive-services Client Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/client-sdks.md
Previously updated : 06/22/2021 Last updated : 07/06/2021
That's it! You've created a program to translate documents in a blob container u
[documenttranslation_client_library_docs]: https://aka.ms/azsdk/net/documenttranslation/docs [documenttranslation_docs]: overview.md [documenttranslation_rest_api]: reference/rest-api-guide.md
-[documenttranslation_samples]: https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0-beta.1/sdk/translation/Azure.AI.Translation.Document/samples/README.md
+[documenttranslation_samples]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Document/samples
### [Python](#tab/python)
That's it! You've created a program to translate documents in a blob container u
[python-dt-client-library]: https://aka.ms/azsdk/python/documenttranslation/docs [python-rest-api]: reference/rest-api-guide.md [python-dt-product-docs]: overview.md
-[python-dt-samples]: https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0b1/sdk/translation/azure-ai-translation-document/samples
+[python-dt-samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/translation/azure-ai-translation-document/samples
-
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
To get started, you'll need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). * A [**Translator**](https://ms.portal.azure.com/#create/Microsoft) service resource (**not** a Cognitive Services multi-service resource. *See* [Create a new Azure resource](../../cognitive-services-apis-create-account.md#create-a-new-azure-cognitive-services-resource).
-* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
+* An [**Azure Blob Storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
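As an aside (not one of the article's portal steps), a container SAS token can also be generated programmatically. The following is a minimal sketch using the azure-storage-blob Python package; the account name, account key, and container name are placeholders:

```python
# Minimal sketch: generate a container-level SAS token with azure-storage-blob.
# Placeholders: the account name, account key, and container name are example values.
from datetime import datetime, timedelta
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

sas_token = generate_container_sas(
    account_name="<storage-account-name>",
    container_name="<source-container>",
    account_key="<storage-account-key>",
    # Source containers typically need read/list; target containers need write/list.
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=24),
)
print(sas_token)  # append to the container URL as "?<sas_token>"
```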
### Create your tokens
Go to the [Azure portal](https://ms.portal.azure.com/#home) and navigate as foll
> [!div class="nextstepaction"] > [Get Started with Document Translation](get-started-with-document-translation.md) >
->
+>
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 06/22/2021 Last updated : 07/06/2021 # Get started with Document Translation
The table below lists the limits for data that you send to Document Translation.
Document Translation cannot be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
+## Troubleshooting
+
+### Common HTTP status codes
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. |
+| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
+| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. When managing your subscription on the Azure portal, make sure you're using the **Translator** single-service resource, _not_ the **Cognitive Services** multi-service resource. |
+| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
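The following is a hedged sketch of a batch submission that surfaces these status codes; the resource name, subscription key, and container SAS URLs are placeholder values:

```python
# Minimal sketch: start a Document Translation batch job and inspect the HTTP status code.
# Placeholders: the resource name, subscription key, and source/target container SAS URLs.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/translator/text/batch/v1.0"
subscription_key = "<your-translator-key>"

body = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://<account>.blob.core.windows.net/source?<sas>"},
            "targets": [
                {
                    "targetUrl": "https://<account>.blob.core.windows.net/target?<sas>",
                    "language": "fr",
                }
            ],
        }
    ]
}

response = requests.post(
    f"{endpoint}/batches",
    headers={"Ocp-Apim-Subscription-Key": subscription_key, "Content-Type": "application/json"},
    json=body,
)
# Compare the status code against the table above; a successful submission typically
# returns 202 Accepted with an Operation-Location header to poll for job status.
print(response.status_code, response.headers.get("Operation-Location"))
```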
+ ## Learn more * [Translator v3 API reference](../reference/v3-0-reference.md)
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/quickstart-translator.md
Previously updated : 09/14/2020 Last updated : 07/06/2021 keywords: translator, translator service, translate text, transliterate text, language detection
In this quickstart, you learn to use the Translator service via REST. You start
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) * Once you have an Azure subscription, [create a Translator resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+ * You'll need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code below later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
## Platform setup
After a successful call, you should see the following response. For more informa
## Troubleshooting
+### Common HTTP status codes
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. |
+| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
+| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
+| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
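For context, the following is a minimal sketch of a translate call whose response returns these codes; the subscription key and region are placeholder values:

```python
# Minimal sketch: call the Translator translate endpoint and inspect the HTTP status code.
# Placeholders: the subscription key and resource region are example values.
import uuid
import requests

subscription_key = "<your-translator-key>"
region = "<your-resource-region>"        # for example, "westus2"
endpoint = "https://api.cognitive.microsofttranslator.com"

params = {"api-version": "3.0", "from": "en", "to": ["fr", "de"]}
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "Hello, world!"}]

response = requests.post(f"{endpoint}/translate", params=params, headers=headers, json=body)
if response.status_code != 200:
    # Map the code against the table above (401: bad key/region, 429: quota, and so on).
    print("Request failed:", response.status_code, response.text)
else:
    print(response.json())
```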
+ ### Java users
-If you're encountering connection issues, it may be that your SSL certificate has expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) to your private store.
+If you're encountering connection issues, it may be that your SSL certificate has expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) to your private store.
## Next steps
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-reference.md
Previously updated : 8/11/2020 Last updated : 07/06/2021
Microsoft Translator is served out of multiple datacenter locations. Currently t
* **Asia Pacific:** Korea South, Japan East, Southeast Asia, and Australia East * **Europe:** North Europe, West Europe
-Requests to the Microsoft Translator are in most cases handled by the datacenter that is closest to where the request originated. In case of a datacenter failure, the request may be routed outside of the geography.
+Requests to the Microsoft Translator are in most cases handled by the datacenter that is closest to where the request originated. If there is a datacenter failure, the request may be routed outside of the geography.
To force the request to be handled by a specific geography, change the Global endpoint in the API request to the desired geographical endpoint: |Geography|Base URL (geographical endpoint)| |:--|:--|
-|Global (non-regional)| api.cognitive.microsofttranslator.com|
-|United States| api-nam.cognitive.microsofttranslator.com|
-|Europe| api-eur.cognitive.microsofttranslator.com|
-|Asia Pacific| api-apc.cognitive.microsofttranslator.com|
+|Global (non-regional)| api.cognitive.microsofttranslator.com|
+|United States| api-nam.cognitive.microsofttranslator.com|
+|Europe| api-eur.cognitive.microsofttranslator.com|
+|Asia Pacific| api-apc.cognitive.microsofttranslator.com|
-<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-ch-n' then your custom endpoint is "https://my-ch-n.cognitiveservices.azure.com". And a sample request to translate is:
+<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-ch-n', then your custom endpoint is "https://my-ch-n.cognitiveservices.azure.com". And a sample request to translate is:
```curl // Pass secret key and region using headers to a custom endpoint curl -X POST " my-ch-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio
#### Authenticating with a regional resource When you use a [regional translator resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation).
-There are 2 headers that you need to call the Translator.
+There are two headers that you need to include when you call the Translator.
|Headers|Description| |:--|:-|
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio
When you use a Cognitive Services multi-service resource, you can use a single secret key to authenticate requests for multiple services.
-When you use a multi-service secret key, you must include two authentication headers with your request. There are 2 headers that you need to call the Translator.
+When you use a multi-service secret key, you must include two authentication headers with your request. The following two headers are required to call the Translator.
|Headers|Description| |:--|:-|
An authentication token is valid for 10 minutes. The token should be reused when
## Virtual Network support
-The Translator service is now available with Virtual Network (VNET) capabilities in all regions of the Azure public cloud. To enable Virtual Network, please see [Configuring Azure Cognitive Services Virtual Networks](../../cognitive-services-virtual-networks.md?tabs=portal).
+The Translator service is now available with Virtual Network (VNET) capabilities in all regions of the Azure public cloud. To enable Virtual Network, see [Configuring Azure Cognitive Services Virtual Networks](../../cognitive-services-virtual-networks.md?tabs=portal).
Once you turn on this capability, you must use the custom endpoint to call the Translator. You cannot use the global translator endpoint ("api.cognitive.microsofttranslator.com") and you cannot authenticate with an access token.
For example, a customer with a free trial subscription would receive the followi
} } ```+ The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes are: | Code | Description |
The error code is a 6-digit number combining the 3-digit HTTP status code follow
| 403000| The operation is not allowed.| | 403001| The operation is not allowed because the subscription has exceeded its free quota.| | 405000| The request method is not supported for the requested resource.|
-| 408001| The translation system requested is being prepared. Please retry in a few minutes.|
+| 408001| The translation system requested is being prepared. Retry in a few minutes.|
| 408002| Request timed out waiting on incoming stream. The client did not produce a request within the time that the server was prepared to wait. The client may repeat the request without modifications at any later time.| | 415000| The Content-Type header is missing or invalid.| | 429000, 429001, 429002| The server rejected the request because the client has exceeded request limits.| | 500000| An unexpected error occurred. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
-| 503000| Service is temporarily unavailable. Please retry. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
+| 503000| Service is temporarily unavailable. Retry. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
## Metrics Metrics allow you to view Translator usage and availability information in the Azure portal, under the Metrics section, as shown in the following screenshot. For more information, see [Data and platform metrics](../../../azure-monitor/essentials/data-platform-metrics.md).
This table lists available metrics with description of how they are used to moni
| TotalErrors| Number of calls with error response.| | BlockedCalls| Number of calls that exceeded rate or quota limit.| | ServerErrors| Number of calls with server internal error(5XX).|
-| ClientErrors| Number of calls with client side error(4XX).|
+| ClientErrors| Number of calls with client-side error (4XX).|
| Latency| Duration to complete request in milliseconds.| | CharactersTranslated| Total number of characters in incoming text request.|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/overview.md
Previously updated : 09/14/2020 Last updated : 07/06/2021
-# What is Azure Metrics Advisor (preview)?
+# What is Azure Metrics Advisor?
Metrics Advisor is a part of [Azure Applied AI Services](../../applied-ai-services/what-are-applied-ai-services.md) that uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predicative maintenance, and business monitor applications on top of the service. Use Metrics Advisor to:
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
Last updated 06/30/2021
-# Call Automation APIs overview
-Call Automation APIs allow you to connect with your users at scale through automated business logic. You can use these APIs to create automated outbound reminder calls for appointments or to provide notifications for events like power outages or wildfires. Applications added to a call can monitor updates as participants join or leave, allowing you to implement reporting and logging.
+# Call Automation overview
+Call Automation APIs enable you to access voice and video calling capabilities from **services**. You can use these APIs to create service applications that drive automated outbound reminder calls for appointments or provide proactive notifications for events like power outages or wildfires. Service applications that join a call can monitor updates such as participants joining or leaving, allowing you to implement rich reporting and logging capabilities.
![in and out-of-call apps](../media/call-automation-apps.png)
Call Automation APIs are provided for both in-call (application-participant or a
| Start and manage call recording | | X | ## In-Call (App-Participant) APIs
-> [!NOTE]
-> In-Call applications are billed as call participants at [standard PSTN and VoIP rates](https://azure.microsoft.com/pricing/details/communication-services/).
-
-> [!NOTE]
-> In-Call actions are attributed to the App-participant associated with the `callConnectionId` used in the API call.
In-call APIs enable an application to take actions in a call as an app-participant. When an application answers or joins a call, a `callConnectionId` is assigned, which is used for in-call actions such as: - Add or remove call participants
Event notifications are sent as JSON payloads to the calling application via the
- Invite participant result ## Next steps
-Check out the [Call Automation Quickstart Sample](../../quickstarts/voice-video-calling/call-automation-api-sample.md) to learn more.
-
-Learn more about [Call Recording](./call-recording.md).
+Check out the [Call Automation Quickstart](../../quickstarts/voice-video-calling/call-automation-api-sample.md) to learn more.
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
# Calling Recording overview
-> [!NOTE]
-> Many countries and states have laws and regulations that apply to the recording of PSTN, voice, and video calls, which often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
-
-> [!NOTE]
-> Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
> [!NOTE] > Call Recording is currently only available for Communication Services resources created in the US region.
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
"eventTime": string // ISO 8601 date time for when the event was created } ```
+## Regulatory and privacy concerns
+
+Many countries and states have laws and regulations that apply to the recording of PSTN, voice, and video calls, which often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
+
+Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
## Next steps Check out the [Call Recoding Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn more.
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
-# Ensure high-quality media in Azure Communication Services
+# Network recommendations
-This document provides an overview of the factors and best practices that should be considered when building high-quality multimedia communication experiences with Azure Communication Services.
-
-## Factors that affect media quality and reliability
-
-There are many different factors that contribute to Azure Communication Services real-time media (audio, video, and application sharing) quality. These include network quality, bandwidth, firewall, host, and device configurations.
+This document summarizes how the network environment impacts voice and video calling quality. There are many different factors that contribute to Azure Communication Services real-time media (audio, video, and application sharing) quality. These include network quality, bandwidth, firewall, host, and device configurations.
### Network quality
The following documents may be interesting to you:
- Learn more about [calling libraries](./calling-sdk-features.md) - Learn about [Client-server architecture](../client-and-server-architecture.md)-- Learn about [Call flow topologies](../call-flows.md)
+- Learn about [Call flow topologies](../call-flows.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
Azure Communication Services are cloud-based services with REST APIs and client
1. Rich Text Chat 1. SMS
-Voice and video calling applications can interact with the publicly switched telephony network (PSTN). You can acquire phone numbers directly through Azure Communication Services REST APIs, SDKs, or the Azure portal. Azure Communication Services direct routing allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
+You can connect custom client endpoints, custom services, and the public switched telephone network (PSTN) to your communications application. You can acquire phone numbers directly through Azure Communication Services REST APIs, SDKs, or the Azure portal; and use these numbers for SMS or calling applications. Azure Communication Services direct routing allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
-In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Java (Android), Windows (.NET). Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
+In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Java (Android), Windows (.NET). A [UI library for web browsers](https://aka.ms/acsstorybook) can accelerate development for mobile and desktop browsers. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
Scenarios for Azure Communication Services include:
communication-services Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/support.md
With Azure, there are many [support options and plans](https://azure.microsoft.c
For quick and reliable answers to product or technical questions you might have about Azure Communication Services from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our community, engage with us on [Microsoft Q&A](/answers/products/azure).
-If you can't find an answer to your problem by searching you can, submit a new question to Microsoft Q&A. When creating a question make sure to use the [Azure Communication Services Tag](/answers/topics/azure-communication-services.html).
+If you can't find an answer to your problem by searching, you can submit a new question to Microsoft Q&A. When creating a question, make sure to use the [Azure Communication Services Tag](/answers/topics/azure-communication-services.html).
## Post a question on Stack Overflow
-You can also try asking your question on Stack Overflow, which has a large community developer community and ecosystem. Azure Communication Services has a [dedicated tag](https://stackoverflow.com/questions/tagged/azure-communication-services) there too.
+You can also try asking your question on Stack Overflow, which has a large developer community and ecosystem. Azure Communication Services has a [dedicated tag](https://stackoverflow.com/questions/tagged/azure-communication-services) there too.
+
+## Provide feedback
+
+To provide feedback on specific functionality that Azure Communication Services provides in the Azure portal, submit your feedback via buttons or links that have this icon :::image type="content" source="./media/give-feedback-icon.png" alt-text="Image of Give Feedback Icon.":::.
+
+Here are some examples:
+- To give feedback about phone numbers, click on the "Give feedback" button in the command bar of the Phone Numbers blade.
+- To give feedback about the connection experience to a notification hub, click on the following link as shown below.
+
+We appreciate your feedback and energy helping us improve our services. Let us know if you are satisfied with Azure Communication Services through this [survey](https://aka.ms/ACS_CAT_Survey).
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-azureblobstorage.md
tags: connectors
# Create and manage blobs in Azure Blob Storage by using Azure Logic Apps
-You can access and manage files stored as blobs in your Azure storage account within Azure Logic Apps using the [Azure Blob Storage connector](/connectors/azureblobconnector/). This connector provides triggers and actions for blob operations within your logic app workflows. You can use these operations to automate tasks and workflows for managing the files in your storage account. [Available connector actions](/connectors/azureblobconnector/#actions) include checking, deleting, reading, and uploading blobs. The [available trigger](/azureblobconnector/#triggers) fires when a blob is added or modified.
+You can access and manage files stored as blobs in your Azure storage account within Azure Logic Apps using the [Azure Blob Storage connector](/connectors/azureblobconnector/). This connector provides triggers and actions for blob operations within your logic app workflows. You can use these operations to automate tasks and workflows for managing the files in your storage account. [Available connector actions](/connectors/azureblobconnector/#actions) include checking, deleting, reading, and uploading blobs. The [available trigger](/connectors/azureblobconnector/#triggers) fires when a blob is added or modified.
You can connect to Blob Storage from both Standard and Consumption logic app resource types. You can use the connector with logic apps in a single-tenant, multi-tenant, or integration service environment (ISE). For logic apps in a single-tenant environment, Blob Storage provides built-in operations and also managed connector operations.
Next, [enable managed identity support](../logic-apps/create-managed-service-ide
### Enable support for managed identity in logic app
-Next, add an [HTTP trigger or action](/connectors/connectors-native-http) in your workflow. Make sure to [set the authentication type to use the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
+Next, add an [HTTP trigger or action](connectors-native-http.md) in your workflow. Make sure to [set the authentication type to use the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
The steps are the same for logic apps in both single-tenant and multi-tenant environments.
container-registry Container Registry Tasks Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tasks-reference-yaml.md
Title: YAML reference - ACR Tasks description: Reference for defining tasks in YAML for ACR Tasks, including task properties, step types, step properties, and built-in variables.-+ Last updated 07/08/2020
cosmos-db Access Secrets From Keyvault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/access-secrets-from-keyvault.md
Title: Use Key Vault to store and access Azure Cosmos DB keys description: Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, endpoints. --++ ms.devlang: dotnet
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/best-practice-dotnet.md
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
| <input type="checkbox" unchecked /> | SDK Version | Always using the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK available for optimal performance. | | <input type="checkbox" unchecked /> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). | | <input type="checkbox" unchecked /> | Regions | Make sure to run your application in the same [Azure region](distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](distribute-data-globally.md). For production workloads, enable [automatic failover](how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
-| <input type="checkbox" unchecked /> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](/azure/cosmos-db/tutorial-global-distribution-sql-api?tabs=dotnetv3%2Capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
+| <input type="checkbox" unchecked /> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](/azure/cosmos-db/tutorial-global-distribution-sql-api?tabs=dotnetv3%2Capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
| <input type="checkbox" unchecked /> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. | | <input type="checkbox" unchecked /> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. | | <input type="checkbox" unchecked /> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/bulk-executor-graph-dotnet.md
Title: Use the graph bulk executor .NET library with Azure Cosmos DB Gremlin API description: Learn how to use the bulk executor library to massively import graph data into an Azure Cosmos DB Gremlin API container.- Last updated 05/28/2019--++
Setting|Description
* To learn about NuGet package details and release notes of bulk executor .NET library, see [bulk executor SDK details](sql-api-sdk-bulk-executor-dot-net.md). * Check out the [Performance Tips](./bulk-executor-dot-net.md#performance-tips) to further optimize the usage of bulk executor.
-* Review the [BulkExecutor.Graph Reference article](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph) for more details about the classes and methods defined in this namespace.
+* Review the [BulkExecutor.Graph Reference article](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph) for more details about the classes and methods defined in this namespace.
cosmos-db Cassandra Api Load Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-api-load-data.md
Title: 'Tutorial: Java app to load sample data into a Cassandra API table in Azure Cosmos DB' description: This tutorial shows how to load sample user data to a Cassandra API table in Azure Cosmos DB by using a java application.- Last updated 05/20/2019-++ #Customer intent: As a developer, I want to build a Java application to load data to a Cassandra API table in Azure Cosmos DB so that customers can store and manage the key/value data and utilize the global distribution, elastic scaling, multi-region , and other capabilities offered by Azure Cosmos DB.
cosmos-db Cassandra Api Query Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-api-query-data.md
Title: 'Tutorial: Query data from a Cassandra API account in Azure Cosmos DB' description: This tutorial shows how to query user data from an Azure Cosmos DB Cassandra API account by using a Java application. --++
cosmos-db Cassandra Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-change-feed.md
Title: Change feed in the Azure Cosmos DB API for Cassandra description: Learn how to use change feed in the Azure Cosmos DB API for Cassandra to get the changes made to your data.- Last updated 11/25/2019+
cosmos-db Cassandra Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-import-data.md
Title: 'Migrate your data to a Cassandra API account in Azure Cosmos DB- Tutorial' description: In this tutorial, learn how to copy data from Apache Cassandra to a Cassandra API account in Azure Cosmos DB.--++
cosmos-db Cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-introduction.md
Title: Introduction to the Azure Cosmos DB Cassandra API description: Learn how you can use Azure Cosmos DB to "lift-and-shift" existing applications and build new applications by using the Cassandra drivers and CQL --++
cosmos-db Cassandra Spark Aggregation Ops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-aggregation-ops.md
Title: Aggregate operations on Azure Cosmos DB Cassandra API tables from Spark description: This article covers basic aggregation operations against Azure Cosmos DB Cassandra API tables from Spark--++
cosmos-db Cassandra Spark Create Ops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-create-ops.md
Title: Create or insert data into Azure Cosmos DB Cassandra API from Spark description: This article details how to insert sample data into Azure Cosmos DB Cassandra API tables--++
cosmos-db Cassandra Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-databricks.md
Title: Access Azure Cosmos DB Cassandra API from Azure Databricks description: This article covers how to work with Azure Cosmos DB Cassandra API from Azure Databricks.--++
cosmos-db Cassandra Spark Delete Ops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-delete-ops.md
Title: Delete operations on Azure Cosmos DB Cassandra API from Spark description: This article details how to delete data in tables in Azure Cosmos DB Cassandra API from Spark--++
cosmos-db Cassandra Spark Generic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-generic.md
Title: Working with Azure Cosmos DB Cassandra API from Spark description: This article is the main page for Cosmos DB Cassandra API integration from Spark.--++
cosmos-db Cassandra Spark Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-hdinsight.md
Title: Access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight description: This article covers how to work with Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight--++
cosmos-db Cassandra Spark Table Copy Ops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-table-copy-ops.md
Title: Table copy operations on Azure Cosmos DB Cassandra API from Spark description: This article details how to copy data between tables in Azure Cosmos DB Cassandra API--++
cosmos-db Cassandra Spark Upsert Ops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-spark-upsert-ops.md
Title: Upsert data into Azure Cosmos DB Cassandra API from Spark description: This article details how to upsert into tables in Azure Cosmos DB Cassandra API from Spark--++
cosmos-db Change Feed Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-functions.md
Title: How to use Azure Cosmos DB change feed with Azure Functions description: Use Azure Functions to connect to Azure Cosmos DB change feed. Later you can create reactive Azure functions that are triggered on every new event.--++
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed.md
Title: Working with the change feed support in Azure Cosmos DB description: Use Azure Cosmos DB change feed support to track changes in documents, event-based processing like triggers, and keep caches and analytic systems up-to-date --++ Last updated 06/07/2021
cosmos-db Connect Mongodb Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/connect-mongodb-account.md
Title: Connect a MongoDB application to Azure Cosmos DB description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal--++
cosmos-db Create Cassandra Api Account Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-api-account-java.md
Title: 'Tutorial: Build Java app to create Azure Cosmos DB Cassandra API account' description: This tutorial shows how to create a Cassandra API account, add a database (also called a keyspace), and add a table to that account by using a Java application.--++
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-dotnet.md
Title: Build an Azure Cosmos DB .NET Framework, Core application using the Gremlin API description: Presents a .NET Framework/Core code sample you can use to connect to and query Azure Cosmos DB-++ ms.devlang: dotnet Last updated 02/21/2020-
cosmos-db Create Graph Gremlin Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-gremlin-console.md
Title: 'Query with Azure Cosmos DB Gremlin API using TinkerPop Gremlin Console: Tutorial' description: An Azure Cosmos DB quickstart to creates vertices, edges, and queries using the Azure Cosmos DB Gremlin API.- Last updated 07/10/2020-++ # Quickstart: Create, query, and traverse an Azure Cosmos DB graph database using the Gremlin console [!INCLUDE[appliesto-gremlin-api](includes/appliesto-gremlin-api.md)]
cosmos-db Create Graph Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-java.md
Title: Build a graph database with Java in Azure Cosmos DB description: Presents a Java code sample you can use to connect to and query graph data in Azure Cosmos DB using Gremlin.- ms.devlang: java Last updated 03/26/2019-++
cosmos-db Create Graph Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-nodejs.md
Title: Build an Azure Cosmos DB Node.js application by using Gremlin API description: Presents a Node.js code sample you can use to connect to and query Azure Cosmos DB- ms.devlang: nodejs Last updated 06/05/2019-++ # Quickstart: Build a Node.js application by using Azure Cosmos DB Gremlin API account
cosmos-db Create Graph Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-php.md
Title: 'Quickstart: Gremlin API with PHP - Azure Cosmos DB' description: This quickstart shows how to use the Azure Cosmos DB Gremlin API to create a console application with the Azure portal and PHP- ms.devlang: php Last updated 01/05/2019-++ # Quickstart: Create a graph database in Azure Cosmos DB using PHP and the Azure portal
cosmos-db Create Graph Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-python.md
Title: 'Quickstart: Gremlin API with Python - Azure Cosmos DB' description: This quickstart shows how to use the Azure Cosmos DB Gremlin API to create a console application with the Azure portal and Python- ms.devlang: python Last updated 03/29/2021-++
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-dotnet.md
Title: Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK description: Presents a .NET code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.--++
cosmos-db Create Mongodb Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-java.md
Title: 'Quickstart: Build a web app using the Azure Cosmos DB API for Mongo DB and Java SDK' description: Learn to build a Java code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.--++ ms.devlang: java
cosmos-db Create Mongodb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-nodejs.md
Title: 'Quickstart: Connect a Node.js MongoDB app to Azure Cosmos DB' description: This quickstart demonstrates how to connect an existing MongoDB app written in Node.js to Azure Cosmos DB.--++ ms.devlang: nodejs
cosmos-db Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/data-explorer.md
Title: Use Azure Cosmos DB Explorer to manage your data description: Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB.- Last updated 09/23/2020-++
cosmos-db Find Request Unit Charge Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-cassandra.md
Title: Find request unit (RU) charge for a Cassandra API query in Azure Cosmos DB description: Learn how to find the request unit (RU) charge for Cassandra queries executed against an Azure Cosmos container. You can use the Azure portal, .NET and Java drivers to find the RU charge. -++ Last updated 10/14/2020- # Find the request unit charge for operations executed in Azure Cosmos DB Cassandra API
cosmos-db Find Request Unit Charge Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-gremlin.md
Title: Find request unit (RU) charge for Gremlin API queries in Azure Cosmos DB description: Learn how to find the request unit (RU) charge for Gremlin queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java drivers to find the RU charge. - Last updated 10/14/2020-++ # Find the request unit charge for operations executed in Azure Cosmos DB Gremlin API
cosmos-db Find Request Unit Charge Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-mongodb.md
Title: Find request unit charge for Azure Cosmos DB API for MongoDB operations description: Learn how to find the request unit (RU) charge for MongoDB queries executed against an Azure Cosmos container. You can use the Azure portal, MongoDB .NET, Java, Node.js drivers.-++ Last updated 03/19/2021-
cosmos-db Graph Execution Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-execution-profile.md
Title: Use the execution profile to evaluate queries in Azure Cosmos DB Gremlin API description: Learn how to troubleshoot and improve your Gremlin queries using the execution profile step. - Last updated 03/27/2019-++
cosmos-db Graph Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-introduction.md
Title: 'Introduction to Azure Cosmos DB Gremlin API' description: Learn how you can use Azure Cosmos DB to store, query, and traverse massive graphs with low latency by using the Gremlin graph query language of Apache TinkerPop.- Last updated 03/22/2021-++ # Introduction to Gremlin API in Azure Cosmos DB [!INCLUDE[appliesto-gremlin-api](includes/appliesto-gremlin-api.md)]
cosmos-db Graph Modeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-modeling.md
Title: 'Graph data modeling for Azure Cosmos DB Gremlin API' description: Learn how to model a graph database by using Azure Cosmos DB Gremlin API. This article describes when to use a graph database and best practices to model entities and relationships. - Last updated 12/02/2019-++ # Graph data modeling for Azure Cosmos DB Gremlin API
cosmos-db Graph Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-partitioning.md
Title: Data partitioning in Azure Cosmos DB Gremlin API description: Learn how you can use a partitioned graph in Azure Cosmos DB. This article also describes the requirements and best practices for a partitioned graph.--++
cosmos-db Graph Visualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-visualization.md
Title: Visualize your graph data in Azure Cosmos DB Gremlin API description: Learn how to integrate Azure Cosmos DB graph data with visualization solutions (Linkurious Enterprise, Cambridge Intelligence).--++
cosmos-db Gremlin Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/gremlin-headers.md
Last updated 09/03/2019--++ # Azure Cosmos DB Gremlin server response headers
cosmos-db Gremlin Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/gremlin-limits.md
Title: Limits of Azure Cosmos DB Gremlin description: Reference documentation for runtime limitations of Graph engine- Last updated 10/04/2019-++ # Azure Cosmos DB Gremlin limits
cosmos-db Gremlin Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/gremlin-support.md
Title: Azure Cosmos DB Gremlin support and compatibility with TinkerPop features description: Learn about the Gremlin language from Apache TinkerPop. Learn which features and steps are available in Azure Cosmos DB and the TinkerPop Graph engine compatibility differences.- Previously updated : 11/11/2020- Last updated : 07/06/2021++ # Azure Cosmos DB Gremlin graph support and compatibility with TinkerPop features
The properties used by the JSON format for vertices are described below:
| `_partition` | The partition key of the vertex. Used for [graph partitioning](graph-partitioning.md). | | `outE` | This property contains a list of out edges from a vertex. Storing the adjacency information with vertex allows for fast execution of traversals. Edges are grouped based on their labels. |
-And the edge contains the following information to help with navigation to other parts of the graph.
+Each property can store multiple values within an array.
| Property | Description | | | |
-| `id` | The ID for the edge. Must be unique (in combination with the value of `_partition` if applicable) |
-| `label` | The label of the edge. This property is optional, and used to describe the relationship type. |
-| `inV` | This property contains a list of in vertices for an edge. Storing the adjacency information with the edge allows for fast execution of traversals. Vertices are grouped based on their labels. |
-| `properties` | Bag of user-defined properties associated with the edge. Each property can have multiple values. |
+| `value` | The value of the property |
-Each property can store multiple values within an array.
+And the edge contains the following information to help with navigation to other parts of the graph.
| Property | Description | | | |
-| `value` | The value of the property
+| `id` | The ID for the edge. Must be unique (in combination with the value of `_partition` if applicable) |
+| `label` | The label of the edge. This property is optional, and used to describe the relationship type. |
+| `inV` | This property contains a list of in vertices for an edge. Storing the adjacency information with the edge allows for fast execution of traversals. Vertices are grouped based on their labels. |
+| `properties` | Bag of user-defined properties associated with the edge. |
## Gremlin steps
cosmos-db How To Access System Properties Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-access-system-properties-gremlin.md
Last updated 09/10/2019--++ # System document properties
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-always-encrypted.md
Title: Use client-side encryption with Always Encrypted for Azure Cosmos DB description: Learn how to use client-side encryption with Always Encrypted for Azure Cosmos DB- Last updated 05/25/2021 + # Use client-side encryption with Always Encrypted for Azure Cosmos DB (Preview)
cosmos-db How To Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-firewall.md
Title: Configure an IP firewall for your Azure Cosmos DB account description: Learn how to configure IP access control policies for firewall support on Azure Cosmos accounts.- Last updated 03/03/2021-++
cosmos-db How To Create Container Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-cassandra.md
Title: Create a container in Azure Cosmos DB Cassandra API description: Learn how to create a container in Azure Cosmos DB Cassandra API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -++ Last updated 10/16/2020-
cosmos-db How To Create Container Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-gremlin.md
Title: Create a container in Azure Cosmos DB Gremlin API description: Learn how to create a container in Azure Cosmos DB Gremlin API by using Azure portal, .NET and other SDKs. - Last updated 10/16/2020-++
cosmos-db How To Create Container Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-mongodb.md
Title: Create a container in Azure Cosmos DB API for MongoDB description: Learn how to create a container in Azure Cosmos DB API for MongoDB by using Azure portal, .NET, Java, Node.js, and other SDKs. - Last updated 10/16/2020-++
cosmos-db How To Provision Throughput Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-throughput-cassandra.md
Title: Provision throughput on Azure Cosmos DB Cassandra API resources description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB Cassandra API resources. You will use Azure portal, CLI, PowerShell and various other SDKs. -++ Last updated 10/15/2020-
cosmos-db How To Provision Throughput Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-throughput-gremlin.md
Title: Provision throughput on Azure Cosmos DB Gremlin API resources description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB Gremlin API resources. You will use Azure portal, CLI, PowerShell and various other SDKs. - Last updated 10/15/2020-++
cosmos-db How To Provision Throughput Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-throughput-mongodb.md
Title: Provision throughput on Azure Cosmos DB API for MongoDB resources description: Learn how to provision container, database, and autoscale throughput in Azure Cosmos DB API for MongoDB resources. You will use Azure portal, CLI, PowerShell and various other SDKs. - Last updated 10/15/2020-++
cosmos-db How To Use Regional Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-use-regional-gremlin.md
Title: Regional endpoints for Azure Cosmos DB Graph database description: Learn how to connect to nearest Graph database endpoint for your application--++
cosmos-db How To Use Resource Tokens Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-use-resource-tokens-gremlin.md
Title: Use Azure Cosmos DB resource tokens with the Gremlin SDK description: Learn how to create resource tokens and use them to access the Graph database. --++
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/integrated-cache.md
The integrated cache supports eventual [consistency](consistency-levels.md) only
The easiest way to configure eventual consistency for all reads is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). However, if you would only like some of your reads to have eventual consistency, you can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level).
-## Integrated cache retention time
+## MaxIntegratedCacheStaleness
-The cache retention time is the maximum retention for cached data. You can set the cache retention time by configuring the `MaxIntegratedCacheStaleness` for each request.
+The `MaxIntegratedCacheStaleness` is the maximum acceptable staleness for cached point reads and queries. The `MaxIntegratedCacheStaleness` is configurable at the request-level. For example, if you set a `MaxIntegratedCacheStaleness` of 2 hours, your request will only return cached data if the data is less than 2 hours old. To increase the likelihood of repeated reads utilizing the integrated cache, you should set the `MaxIntegratedCacheStaleness` as high as your business requirements allow.
-Your `MaxIntegratedCacheStaleness` is the maximum time in which you are willing to tolerate stale cached data. For example, if you set a `MaxIntegratedCacheStaleness` of 2 hours, your request will only return cached data if the data is less than 2 hours old. To increase the likelihood of repeated reads utilizing the integrated cache, you should set the `MaxIntegratedCacheStaleness` as high as your business requirements allow.
+It's important to understand that the `MaxIntegratedCacheStaleness`, when configured on a request that ends up populating the cache, doesn't impact how long the data from that request is cached. `MaxIntegratedCacheStaleness` enforces consistency when you try to use cached data. There's no global TTL or cache retention setting, so data is only evicted from the cache if either the integrated cache is full or a new read is run with a lower `MaxIntegratedCacheStaleness` than the age of the current cached entry.
+
+This is an improvement over how most caches work and allows the following additional customizations:
+
+- You can set different staleness requirements for each point read or query.
+- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values.
+- If you want to modify read consistency when using cached data, changing `MaxIntegratedCacheStaleness` has an immediate effect on read consistency.
> [!NOTE] > When not explicitly configured, the MaxIntegratedCacheStaleness defaults to 5 minutes.
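To make the request-level configuration concrete, the following is a minimal Python sketch. It assumes an account with a dedicated gateway and that the installed `azure-cosmos` SDK version exposes the `max_integrated_cache_staleness_in_ms` request option; the endpoint, key, and item identifiers are placeholders.

```python
from azure.cosmos import CosmosClient

# Connect through the dedicated gateway endpoint (required for the integrated cache).
# The endpoint, key, database, and container names are placeholders.
client = CosmosClient(
    url="https://<account-name>.sqlx.cosmos.azure.com:443/",
    credential="<account-key>",
    consistency_level="Eventual",  # the integrated cache serves only eventually consistent reads
)
container = client.get_database_client("<database>").get_container_client("<container>")

# Point read: accept cached data up to 2 hours old for this request only.
# max_integrated_cache_staleness_in_ms is assumed to be available in your SDK version.
item = container.read_item(
    item="<item-id>",
    partition_key="<partition-key-value>",
    max_integrated_cache_staleness_in_ms=2 * 60 * 60 * 1000,
)

# Query: a different, tighter staleness bound can be used for another request.
results = list(container.query_items(
    query="SELECT * FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": "books"}],
    partition_key="<partition-key-value>",
    max_integrated_cache_staleness_in_ms=5 * 60 * 1000,
))
```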
cosmos-db Local Emulator Export Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-export-ssl-certificates.md
description: Learn how to export the Azure Cosmos DB Emulator certificate for us
Last updated 09/17/2020--++
cosmos-db Mongodb Change Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-change-streams.md
Title: Change streams in Azure Cosmos DB's API for MongoDB description: Learn how to use change streams in Azure Cosmos DB's API for MongoDB to get the changes made to your data.- Last updated 03/02/2021-++
cosmos-db Mongodb Compass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-compass.md
Last updated 06/05/2020--++ # Use MongoDB Compass to connect to Azure Cosmos DB's API for MongoDB
cosmos-db Mongodb Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-consistency.md
Title: Mapping consistency levels for Azure Cosmos DB API for MongoDB description: Mapping consistency levels for Azure Cosmos DB API for MongoDB.--++
cosmos-db Mongodb Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-custom-commands.md
Title: MongoDB extension commands to manage data in Azure Cosmos DB's API for MongoDB description: This article describes how to use MongoDB extension commands to manage data stored in Azure Cosmos DB's API for MongoDB. -++ Last updated 03/02/2021-
cosmos-db Mongodb Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support-36.md
Last updated 03/02/2021--++ # Azure Cosmos DB's API for MongoDB (3.6 version): supported features and syntax
cosmos-db Mongodb Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support.md
Last updated 10/16/2019--++ # Azure Cosmos DB's API for MongoDB (3.2 version): supported features and syntax
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-indexing.md
ms.devlang: nodejs Last updated 03/02/2021--++ # Manage indexing in Azure Cosmos DB's API for MongoDB
cosmos-db Mongodb Migrate Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-migrate-databricks.md
+
+ Title: Migrate from MongoDB to Azure Cosmos DB API for MongoDB, using Databricks and Spark
+description: Learn how to use Databricks Spark to migrate large datasets from MongoDB instances to Azure Cosmos DB.
+++++ Last updated : 06/29/2021+
+# Migrate data from MongoDB to an Azure Cosmos DB API for MongoDB account by using Azure Databricks
+
+This migration guide is part of a series on migrating databases from MongoDB to the Azure Cosmos DB API for MongoDB. The critical migration steps are [pre-migration](mongodb-pre-migration.md), migration, and [post-migration](mongodb-post-migration.md), as shown below.
+++
+## Data migration using Azure Databricks
+
+[Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/). It offers a way to do offline migrations on a large-scale dataset. You can use Azure Databricks to do an offline migration of databases from MongoDB to Azure Cosmos DB API for MongoDB.
+
+In this tutorial, you will learn how to:
+
+- Provision an Azure Databricks cluster
+
+- Add dependencies
+
+- Create and run a Scala or Python notebook
+
+- Optimize the migration performance
+
+- Troubleshoot rate-limiting errors that may be observed during migration
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+- [Complete the pre-migration](mongodb-pre-migration.md) steps such as estimating throughput and choosing a shard key.
+- [Create an Azure Cosmos DB API for MongoDB account](https://ms.portal.azure.com/#create/Microsoft.DocumentDB).
+
+## Provision an Azure Databricks cluster
+
+You can follow instructions to [provision an Azure Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). We recommend selecting Databricks runtime version 7.6, which supports Spark 3.0.
+++
+## Add dependencies
+
+Add the MongoDB Connector for Spark library to your cluster to connect to both native MongoDB and Azure Cosmos DB API for MongoDB endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `org.mongodb.spark:mongo-spark-connector_2.12:3.0.1` Maven coordinates.
+++
+Select **Install**, and then restart the cluster when installation is complete.
+
+> [!NOTE]
+> Make sure that you restart the Databricks cluster after the MongoDB Connector for Spark library has been installed.
+
+After that, you may create a Scala or Python notebook for migration.
++
+## Create Scala notebook for migration
+
+Create a Scala Notebook in Databricks. Make sure to enter the right values for the variables before running the following code:
++
+```scala
+import com.mongodb.spark._
+import com.mongodb.spark.config._
+import org.apache.spark._
+import org.apache.spark.sql._
+
+var sourceConnectionString = "mongodb://<USERNAME>:<PASSWORD>@<HOST>:<PORT>/<AUTHDB>"
+var sourceDb = "<DBNAME>"
+var sourceCollection = "<COLLECTIONNAME>"
+var targetConnectionString = "mongodb://<ACCOUNTNAME>:<PASSWORD>@<ACCOUNTNAME>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<ACCOUNTNAME>@"
+var targetDb = "<DBNAME>"
+var targetCollection = "<COLLECTIONNAME>"
+
+val readConfig = ReadConfig(Map(
+ "spark.mongodb.input.uri" -> sourceConnectionString,
+ "spark.mongodb.input.database" -> sourceDb,
+ "spark.mongodb.input.collection" -> sourceCollection,
+))
+
+val writeConfig = WriteConfig(Map(
+ "spark.mongodb.output.uri" -> targetConnectionString,
+ "spark.mongodb.output.database" -> targetDb,
+ "spark.mongodb.output.collection" -> targetCollection,
+ "spark.mongodb.output.maxBatchSize" -> "8000"
+))
+
+val sparkSession = SparkSession
+ .builder()
+ .appName("Data transfer using spark")
+ .getOrCreate()
+
+val customRdd = MongoSpark.load(sparkSession, readConfig)
+
+MongoSpark.save(customRdd, writeConfig)
+```
+
+## Create Python notebook for migration
+
+Create a Python Notebook in Databricks. Make sure to enter the right values for the variables before running the following code:
++
+```python
+from pyspark.sql import SparkSession
+
+sourceConnectionString = "mongodb://<USERNAME>:<PASSWORD>@<HOST>:<PORT>/<AUTHDB>"
+sourceDb = "<DBNAME>"
+sourceCollection = "<COLLECTIONNAME>"
+targetConnectionString = "mongodb://<ACCOUNTNAME>:<PASSWORD>@<ACCOUNTNAME>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<ACCOUNTNAME>@"
+targetDb = "<DBNAME>"
+targetCollection = "<COLLECTIONNAME>"
+
+my_spark = SparkSession \
+ .builder \
+ .appName("myApp") \
+ .getOrCreate()
+
+df = my_spark.read.format("com.mongodb.spark.sql.DefaultSource") \
+  .option("uri", sourceConnectionString) \
+  .option("database", sourceDb) \
+  .option("collection", sourceCollection) \
+  .load()
+
+df.write.format("mongo") \
+  .mode("append") \
+  .option("uri", targetConnectionString) \
+  .option("maxBatchSize", 2500) \
+  .option("database", targetDb) \
+  .option("collection", targetCollection) \
+  .save()
+```
+
+## Optimize the migration performance
+
+The migration performance can be adjusted through these configurations:
+
+- **Number of workers and cores in the Spark cluster**: More workers mean more compute nodes to execute tasks.
+
+- **maxBatchSize**: The `maxBatchSize` value controls the rate at which data is saved to the target Azure Cosmos DB collection. However, if the maxBatchSize is too high for the collection throughput, it can cause [rate limiting](prevent-rate-limiting-errors.md) errors.
+
+ You need to adjust the number of workers and maxBatchSize depending on the number of executors in the Spark cluster, potentially the size (and therefore the RU cost) of each document being written, and the target collection throughput limits.
+
+ >[!TIP]
+ >maxBatchSize = Collection throughput / ( RU cost for 1 document \* number of Spark workers \* number of CPU cores per worker ). A worked example of this formula is sketched after this list.
+
+- **MongoDB Spark partitioner and partitionKey**: The default partitioner is MongoDefaultPartitioner, and the default partitionKey is _id. The partitioner can be changed by assigning the value `MongoSamplePartitioner` to the input configuration property `spark.mongodb.input.partitioner`. Similarly, the partitionKey can be changed by assigning the appropriate field name to the input configuration property `spark.mongodb.input.partitioner.partitionKey`. The right partitionKey can help avoid data skew (a large number of records being written for the same shard key value).
+
+- **Disable indexes during data transfer:** For migrations of large amounts of data, consider disabling indexes, especially the wildcard index, on the target collection. Indexes increase the RU cost of writing each document, and freeing these RUs can help improve the data transfer rate. You can re-enable the indexes once the data has been migrated.
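To make the maxBatchSize tip above concrete, here is a small worked sketch using hypothetical numbers (the throughput, per-document RU cost, and cluster size are illustrative assumptions, not recommendations):

```python
# Hypothetical inputs for illustration only.
collection_throughput = 10_000   # RU/s provisioned on the target collection
ru_cost_per_document = 10        # RU charged to write one document
spark_workers = 4
cores_per_worker = 4

# maxBatchSize = collection throughput / (RU cost per document * workers * cores per worker)
max_batch_size = collection_throughput / (ru_cost_per_document * spark_workers * cores_per_worker)
print(int(max_batch_size))  # 62 -> a starting point; lower it if you still see rate limiting
```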
+++
+## Troubleshoot
+
+### Timeout Error (Error code 50)
+You might see a 50 error code for operations against the Cosmos DB API for MongoDB database. The following scenarios can cause timeout errors:
+
+- **Throughput allocated to the database is low**: Ensure that the target collection has sufficient throughput assigned to it.
+- **Excessive data skew with large data volume**. If you have a large amount of data to migrate into a given table but have a significant skew in the data, you might still experience rate limiting even if you have sufficient [request units](request-units.md) provisioned in your table. Request units are divided equally among physical partitions, and heavy data skew can cause a bottleneck of requests to a single shard. Data skew means a large number of records for the same shard key value.
+
+### Rate limiting (Error code 16500)
+
+You might see a 16500 error code for operations against the Cosmos DB API for MongoDB database. These are rate-limiting errors and may be observed on older accounts or accounts where the server-side retry feature is disabled.
+- **Enable server-side retry**: Enable the Server Side Retry (SSR) feature and let the server retry the rate-limited operations automatically.
+++
+## Post-migration optimization
+
+After you migrate the data, you can connect to Azure Cosmos DB and manage the data. You can also follow other post-migration steps, such as optimizing the indexing policy, updating the default consistency level, or configuring global distribution for your Azure Cosmos DB account. For more information, see the [Post-migration optimization](mongodb-post-migration.md) article.
+
+## Next steps
+
+* [Manage indexing in Azure Cosmos DB's API for MongoDB](mongodb-indexing.md)
+
+* [Find the request unit charge for operations](find-request-unit-charge-mongodb.md)
cosmos-db Mongodb Mongochef https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-mongochef.md
Last updated 03/20/2020--++ # Connect to an Azure Cosmos account using Studio 3T
cosmos-db Mongodb Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-mongoose.md
ms.devlang: nodejs Last updated 03/20/2020--++ # Connect a Node.js Mongoose application to Azure Cosmos DB
cosmos-db Mongodb Post Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-post-migration.md
Title: Post-migration optimization steps with Azure Cosmos DB's API for MongoDB description: This doc provides the post-migration optimization techniques from MongoDB to Azure Cosmos DB's APi for Mongo DB.- Last updated 05/19/2021-++
cosmos-db Mongodb Pre Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-pre-migration.md
Title: Pre-migration steps for data migration to Azure Cosmos DB's API for MongoDB description: This doc provides an overview of the prerequisites for a data migration from MongoDB to Cosmos DB.- Last updated 05/17/2021-++ # Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB's API for MongoDB
cosmos-db Mongodb Readpreference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-readpreference.md
Title: Use Read preference with the Azure Cosmos DB's API for MongoDB description: Learn how to use MongoDB Read Preference with the Azure Cosmos DB's API for MongoDB--++ ms.devlang: nodejs
cosmos-db Mongodb Robomongo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-robomongo.md
Last updated 03/23/2020--++ # Use Robo 3T with Azure Cosmos DB's API for MongoDB
cosmos-db Mongodb Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-samples.md
ms.devlang: nodejs Last updated 12/26/2018--++ # Build an app using Node.js and Azure Cosmos DB's API for MongoDB
cosmos-db Mongodb Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-time-to-live.md
Title: MongoDB per-document TTL feature in Azure Cosmos DB description: Learn how to set time to live value for documents using Azure Cosmos DB's API for MongoDB to automatically purge them from the system after a period of time.--++ ms.devlang: javascript
cosmos-db Mongodb Troubleshoot Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-troubleshoot-query.md
Title: Troubleshoot query issues when using the Azure Cosmos DB API for MongoDB description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB's API for MongoDB query issues.- Last updated 03/02/2021-++
cosmos-db Mongodb Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-troubleshoot.md
Title: Troubleshoot common errors in Azure Cosmos DB's API for Mongo DB description: This doc discusses the ways to troubleshoot common issues encountered in Azure Cosmos DB's API for MongoDB.- Last updated 07/15/2020-++
cosmos-db Mongodb Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-version-upgrade.md
Title: Upgrade the Mongo version of your Azure Cosmos DB's API for MongoDB account description: How to upgrade the MongoDB wire-protocol version for your existing Azure Cosmos DB's API for MongoDB accounts seamlessly- Last updated 03/19/2021-++
cosmos-db Monitor Normalized Request Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-normalized-request-units.md
Title: Monitor normalized RU/s for an Azure Cosmos container or an account
description: Learn how to monitor the normalized request unit usage of an operation in Azure Cosmos DB. Owners of an Azure Cosmos DB account can understand which operations are consuming more request units. --++ Last updated 01/07/2021
cosmos-db Monitor Request Unit Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-request-unit-usage.md
Title: Monitor the throughput usage of an operation in Azure Cosmos DB
description: Learn how to monitor the throughput or request unit usage of an operation in Azure Cosmos DB. Owners of an Azure Cosmos DB account can understand which operations are taking more request units. --++ Last updated 04/09/2020
cosmos-db Monitor Server Side Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-server-side-latency.md
Title: How to monitor the server-side latency for operations in Azure Cosmos DB
description: Learn how to monitor server latency for operations in Azure Cosmos DB account or a container. Owners of an Azure Cosmos DB account can understand the server-side latency issues with your Azure Cosmos accounts. --++ Last updated 04/07/2020
cosmos-db Queries Mongo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/queries-mongo.md
Title: Troubleshoot issues with advanced diagnostics queries for Mongo API description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Mongo API- Last updated 06/12/2021-++ # Troubleshoot issues with advanced diagnostics queries for Mongo API
cosmos-db Troubleshoot Local Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-local-emulator.md
Title: Troubleshoot issues when using the Azure Cosmos DB Emulator
description: Learn how to troubleshot service unavailable, certificate, encryption, and versioning issues when using the Azure Cosmos DB Emulator. --++ Last updated 09/17/2020
cosmos-db Tutorial Global Distribution Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-global-distribution-mongodb.md
Title: 'Tutorial to set up global distribution with Azure Cosmos DB API for MongoDB' description: Learn how to set up global distribution using Azure Cosmos DB's API for MongoDB.--++
cosmos-db Tutorial Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-query-graph.md
Title: How to query graph data in Azure Cosmos DB? description: Learn how to query graph data from Azure Cosmos DB using Gremlin queries--++
cosmos-db Tutorial Query Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-query-mongodb.md
Title: Query data with Azure Cosmos DB's API for MongoDB description: Learn how to query data from Azure Cosmos DB's API for MongoDB by using MongoDB shell commands--++
cosmos-db Tutorial Setup Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-setup-ci-cd.md
Title: Set up CI/CD pipeline with Azure Cosmos DB Emulator build task description: Tutorial on how to set up build and release workflow in Azure DevOps using the Cosmos DB emulator build task- Last updated 01/28/2020-++
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-metrics.md
Title: Monitor and debug with metrics in Azure Cosmos DB description: Use metrics in Azure Cosmos DB to debug common issues and monitor the database.--++
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/manage-automation.md
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDe
For modern customers with a Microsoft Customer Agreement, use the following call: ```http
-GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?startDate=2020-08-01&endDate=&2020-08-05$top=1000&api-version=2019-10-01
+GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?startDate=2020-08-01&endDate=2020-08-05&$top=1000&api-version=2019-10-01
``` ### Get amortized cost details
cost-management-billing Track Consumption Commitment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/track-consumption-commitment.md
tags: billing
Previously updated : 06/30/2021 Last updated : 07/06/2021
The Microsoft Azure Consumption Commitment (MACC) is a contractual commitment th
:::image type="content" source="./media/track-consumption-commitment/select-macc-tab.png" alt-text="Screenshot that shows selecting the MACC tab." lightbox="./media/track-consumption-commitment/select-macc-tab.png" ::: 5. The Microsoft Azure Consumption Commitment (MACC) tab has the following sections.
-#### Remaining Commitment
+#### Remaining commitment
The remaining commitment displays the remaining commitment amount since your last invoice.
The API response returns a list of billing accounts.
"name": "9a12f056-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx", "type": "Microsoft.Billing/billingAccounts", "properties": {
- "displayName": "Connie Wilson",
+ "displayName": "Kayla Lewis",
"agreementType": "MicrosoftCustomerAgreement", "accountStatus": "Active", "accountType": "Individual",
The API response returns a list of billing accounts.
Use the `displayName` property of the billing account to identify the billing account for which you want to track MACC. Copy the `name` of the billing account. For example, if you want to track MACC for **Contoso** billing account, you'd copy `9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`. Paste this value somewhere so that you can use it in the next step.
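As an optional illustration of this step, the following minimal Python sketch lists billing accounts and picks out the `name` for a given `displayName`. It assumes a bearer token for Azure Resource Manager obtained through the azure-identity package, and the `api-version` shown is an assumption; adjust both to match your environment.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a bearer token for Azure Resource Manager (assumes you are signed in, for example via `az login`).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# List billing accounts; the api-version below is an assumption, use the one shown in your docs.
response = requests.get(
    "https://management.azure.com/providers/Microsoft.Billing/billingAccounts",
    params={"api-version": "2020-05-01"},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

# Find the billing account `name` for the display name you want to track.
target_display_name = "Contoso"  # placeholder
for account in response.json().get("value", []):
    if account["properties"]["displayName"] == target_display_name:
        print(account["name"])  # use this value as <billingAccountName> in the next step
```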
-### Get list of Microsoft Azure Consumption Commitments
+### Get a list of Microsoft Azure Consumption Commitments
Make the following request, replacing `<billingAccountName>` with the `name` copied in the first step (`9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`).
The API response returns all events that affected your MACC commitment.
-## Azure Services and Marketplace Offers that are eligible for MACC
+## Azure Services and Marketplace offers that are eligible for MACC
You can determine which Azure services and Marketplace offers are eligible for MACC decrement in the Azure portal. For more information, see [Determine which offers are eligible for Azure consumption commitments (MACC/CtC)](/marketplace/azure-consumption-commitment-benefit#determine-which-offers-are-eligible-for-azure-consumption-commitments-maccctc).
-## Azure Credits and MACC
+## Azure credits and MACC
If your organization received Azure credits from Microsoft, the consumption or purchases that are covered by credits won't contribute towards your MACC commitment.
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-troubleshooting.md
Activity execution time varies when the dataset is based on different Integratio
- WriteBatchSize is not large enough to fit the schema row size. Try increasing the value of this property to resolve the issue.
- - Instead of bulk inset, stored procedure is being used, which is expected to have worse performance.
+ - Instead of bulk insert, stored procedure is being used, which is expected to have worse performance.
### Timeout or slow performance when parsing large Excel file
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool-metadata-driven.md
+
+ Title: Build large-scale data copy pipelines with metadata-driven approach in copy data tool
+description: 'Provides information about the metadata-driven approach in ADF copy data tool'
++++ Last updated : 06/19/2021++
+# Build large-scale data copy pipelines with metadata-driven approach in copy data tool (Preview)
+
+When you want to copy huge amounts of objects (for example, thousands of tables) or load data from a large variety of sources, the appropriate approach is to input the list of object names with the required copy behaviors in a control table, and then use parameterized pipelines to read them from the control table and apply them to the jobs accordingly. By doing so, you can easily maintain (for example, add or remove) the list of objects to be copied just by updating the object names in the control table instead of redeploying the pipelines. What's more, you have a single place to easily check which objects are copied by which pipelines or triggers, and with which copy behaviors.
+
+The copy data tool in ADF eases the journey of building such metadata-driven data copy pipelines. After you go through an intuitive, wizard-based flow, the tool generates parameterized pipelines and SQL scripts for you to create the external control tables accordingly. After you run the generated scripts to create the control table in your SQL database, your pipelines read the metadata from the control table and apply it to the copy jobs automatically.
+
+## Create metadata-driven copy jobs from copy data tool
+
+1. Select **Metadata-driven copy task** in copy data tool.
+
+ You need to input the connection and table name of your control table, so that the generated pipeline will read metadata from that.
+
+ ![Select task type](./media/copy-data-tool-metadata-driven/select-task-type.png)
+
+2. Input the **connection of your source database**. You can use [parameterized linked service](parameterize-linked-services.md) as well.
+
+ ![Select parameterized linked service](./media/copy-data-tool-metadata-driven/select-parameterized-linked-service.png)
+
+3. Select the **table name** to copy.
+
+ ![Select table](./media/copy-data-tool-metadata-driven/select-table.png)
+
+ > [!NOTE]
+ > If you select a tabular data store, you will have the chance to further select either full load or incremental load on the next page. If you select a storage store, you can select only full load on the next page. Incrementally loading only new files from a storage store is currently not supported.
+
+4. Choose **loading behavior**.
+ >[!TIP]
+ >If you want to do a full copy of all the tables, select **Full load all tables**. If you want to do an incremental copy, select **configure for each table individually**, and select **Delta load** as well as the watermark column name and starting value for each table.
+
+5. Select **Destination data store**.
+
+6. On the **Settings** page, you can set the maximum number of copy activities that copy data from your source store concurrently via **Number of concurrent copy tasks**. The default value is 20.
+
+ ![Settings page](./media/copy-data-tool-metadata-driven/settings.png)
+
+7. After pipeline deployment, you can copy or download the SQL scripts from the UI for creating the control tables and the stored procedure.
+
+ ![Download scripts](./media/copy-data-tool-metadata-driven/download-scripts.png)
+
+ You will see two SQL scripts.
+
+ - The first SQL script is used to create two control tables. The main control table stores the table list, file paths, and copy behaviors. The connection control table stores the connection values of your data store if you used a parameterized linked service.
+ - The second SQL script is used to create a stored procedure. It updates the watermark value in the main control table each time an incremental copy job completes.
+
+8. Open **SSMS** to connect to your control table server, and run the two SQL scripts to create the control tables and the stored procedure.
+
+ ![Create control table script](./media/copy-data-tool-metadata-driven/create-control-table-script.png)
+
+9. Query the main control table and connection control table to review the metadata in them.
+
+ **Main control table**
+ ![Query control table script1](./media/copy-data-tool-metadata-driven/query-control-table.png)
+
+ **Connection control table**
+ ![Query control table script2](./media/copy-data-tool-metadata-driven/query-connection-control-table.png)
+
+10. Go back to the ADF portal to view and debug pipelines. You will see a folder named "MetadataDrivenCopyTask_###_######". **Click** the pipeline named "MetadataDrivenCopyTask_###_TopLevel", and then click **Debug run**.
+
+ You are required to input the following parameters:
+
+ | Parameters name | Description |
+ |: |: |
+ |MaxNumberOfConcurrentTasks |You can always change the max number of concurrent copy activity runs before the pipeline runs. The default value is the one you entered in the copy data tool. |
+ |MainControlTableName | You can always change the main control table name, so the pipeline gets the metadata from that table before the run. |
+ |ConnectionControlTableName |You can always change the connection control table name (optional), so the pipeline gets the metadata related to the data store connection before the run. |
+ |MaxNumberOfObjectsReturnedFromLookupActivity |To avoid reaching the output limit of the Lookup activity, this parameter defines the max number of objects returned by the Lookup activity. In most cases, the default value does not need to be changed. |
+ |windowStart |When you input a dynamic value (for example, yyyy/mm/dd) as the folder path, this parameter is used to pass the current trigger time to the pipeline in order to fill the dynamic folder path. When the pipeline is triggered by a schedule trigger or tumbling window trigger, you do not need to input the value of this parameter. Sample value: 2021-01-25T01:49:28Z |
+
+
+11. Enable the trigger to operationalize the pipelines.
+
+ ![Enable trigger](./media/copy-data-tool-metadata-driven/enable-trigger.png)
++
+## Update control table by copy data tool
+You can always directly update the control table by adding or removing the objects to be copied or changing the copy behavior for each table. We also provide a UI experience in the copy data tool to make editing the control table easier.
+
+1. Right-click the top-level pipeline: **MetadataDrivenCopyTask_xxx_TopLevel**, and then select **Edit control table**.
+
+ ![Edit control table1](./media/copy-data-tool-metadata-driven/edit-control-table.png)
+
+2. Select rows from the control table to edit.
+
+ ![Edit control table2](./media/copy-data-tool-metadata-driven/edit-control-table-select-tables.png)
+
+3. Go through the copy data tool, and it will generate a new SQL script for you. Rerun the SQL script to update your control table.
+
+ ![Edit control table3](./media/copy-data-tool-metadata-driven/edit-control-table-create-script.png)
+
+ > [!NOTE]
+ > The pipeline will NOT be redeployed. The newly created SQL script helps you update the control table only.
+
+## Control tables
+
+### Main control table
+Each row in the control table contains the metadata for one object (for example, one table) to be copied.
+
+| Column name | Description |
+|: |: |
+| Id | Unique ID of the object to be copied. |
+| SourceObjectSettings | Metadata of source dataset. It can be schema name, table name etc. Here is an [example](connector-azure-sql-database.md#dataset-properties). |
+| SourceConnectionSettingsName | The name of the source connection setting in connection control table. It is optional. |
+| CopySourceSettings | Metadata of source property in copy activity. It can be query, partitions etc. Here is an [example](connector-azure-sql-database.md#azure-sql-database-as-the-source). |
+| SinkObjectSettings | Metadata of the destination dataset. It can be the file name, folder path, table name, etc. Here is an [example](connector-azure-data-lake-storage.md#azure-data-lake-storage-gen2-as-a-sink-type). If a dynamic folder path is specified, the variable value will not be written here in the control table. |
+| SinkConnectionSettingsName | The name of the destination connection setting in connection control table. It is optional. |
+| CopySinkSettings | Metadata of sink property in copy activity. It can be preCopyScript, tableOption etc. Here is an [example](connector-azure-sql-database.md#azure-sql-database-as-the-sink). |
+| CopyActivitySettings | Metadata of translator property in copy activity. It is used to define column mapping. |
+| TopLevelPipelineName | Top Pipeline name, which can copy this object. |
+| TriggerName | Trigger name, which can trigger the pipeline to copy this object. |
+| DataLoadingBehaviorSettings |Full load vs. delta load. |
+| TaskId | Objects are copied in the order of the TaskId in the control table (ORDER BY [TaskId] DESC). If you have huge amounts of objects to be copied but only a limited number of concurrent copies allowed, you can change the TaskId for each object to decide which objects are copied earlier. The default value is 0. |
+
+### Connection control table
+Each row in control table contains one connection setting for the data store.
+
+| Column name | Description |
+|: |: |
+| Name | Name of the parameterized connection in main control table. |
+| ConnectionSettings | The connection settings. It can be DB name, Server name and so on. |
+
+## Pipelines
+You will see that three levels of pipelines are generated by the copy data tool.
+
+### MetadataDrivenCopyTask_xxx_TopLevel
+This pipeline will calculate the total number of objects (tables etc.) required to be copied in this run, come up with the number of sequential batches based on the max allowed concurrent copy task, and then execute another pipeline to copy different batches sequentially.
+
+#### Parameters
+| Parameters name | Description |
+|: |: |
+| MaxNumberOfConcurrentTasks | You can always change the max number of concurrent copy activity runs before the pipeline runs. The default value is the one you entered in the copy data tool. |
+| MainControlTableName | The table name of the main control table. The pipeline gets the metadata from this table before the run. |
+| ConnectionControlTableName | The table name of the connection control table (optional). The pipeline gets the metadata related to the data store connection before the run. |
+| MaxNumberOfObjectsReturnedFromLookupActivity | To avoid reaching the output limit of the Lookup activity, this parameter defines the max number of objects returned by the Lookup activity. In most cases, the default value does not need to be changed. |
+| windowStart | When you input a dynamic value (for example, yyyy/mm/dd) as the folder path, this parameter is used to pass the current trigger time to the pipeline in order to fill the dynamic folder path. When the pipeline is triggered by a schedule trigger or tumbling window trigger, you do not need to input the value of this parameter. Sample value: 2021-01-25T01:49:28Z |
+
+#### Activities
+| Activity name | Activity type | Description |
+|: |: |: |
+| GetSumOfObjectsToCopy | Lookup | Calculate the total number of objects (tables etc.) required to be copied in this run. |
+| CopyBatchesOfObjectsSequentially | ForEach | Come up with the number of sequential batches based on the max allowed concurrent copy tasks, and then execute another pipeline to copy different batches sequentially. |
+| CopyObjectsInOneBtach | Execute Pipeline | Execute another pipeline to copy one batch of objects. The objects belonging to this batch will be copied in parallel. |
++
+### MetadataDrivenCopyTask_xxx_MiddleLevel
+This pipeline will copy one batch of objects. The objects belonging to this batch will be copied in parallel.
+
+#### Parameters
+| Parameters name | Description |
+|: |: |
+| MaxNumberOfObjectsReturnedFromLookupActivity | To avoid reaching the output limit of the Lookup activity, this parameter defines the max number of objects returned by the Lookup activity. In most cases, the default value is not required to be changed. |
+| TopLayerPipelineName | The name of top layer pipeline. |
+| TriggerName | The name of trigger. |
+| CurrentSequentialNumberOfBatch | The ID of sequential batch. |
+| SumOfObjectsToCopy | The total number of objects to copy. |
+| SumOfObjectsToCopyForCurrentBatch | The number of objects to copy in current batch. |
+| MainControlTableName | The name of main control table. |
+| ConnectionControlTableName | The name of connection control table. |
+
+#### Activities
+| Activity name | Activity type | Description |
+|: |: |: |
+| DivideOneBatchIntoMultipleGroups | ForEach | Divide objects from single batch into multiple parallel groups to avoid reaching the output limit of lookup activity. |
+| GetObjectsPerGroupToCopy | Lookup | Get objects (tables etc.) from control table required to be copied in this group. The order of objects to be copied following the TaskId in control table (ORDER BY [TaskId] DESC). |
+| CopyObjectsInOneGroup | Execute Pipeline | Execute another pipeline to copy objects from one group. The objects belonging to this group will be copied in parallel. |
++
+### MetadataDrivenCopyTask_xxx_BottomLevel
+This pipeline will copy objects from one group. The objects belonging to this group will be copied in parallel.
+
+#### Parameters
+| Parameters name | Description |
+|: |: |
+| ObjectsPerGroupToCopy | The number of objects to copy in current group. |
+| ConnectionControlTableName | The name of connection control table. |
+| windowStart | It is used to pass the current trigger time to the pipeline in order to fill the dynamic folder path if configured by the user. |
+
+#### Activities
+| Activity name | Activity type | Description |
+|: |: |: |
+| ListObjectsFromOneGroup | ForEach | List objects from one group and iterate each of them to downstream activities. |
+| RouteJobsBasedOnLoadingBehavior | Switch | Check the loading behavior for each object. If it is the default or FullLoad case, do a full load. If it is the DeltaLoad case, do an incremental load via the watermark column to identify changes. |
+| FullLoadOneObject | Copy | Take a full snapshot on this object and copy it to the destination. |
+| DeltaLoadOneObject | Copy | Copy only the data changed since the last run by comparing the value in the watermark column to identify changes. |
+| GetMaxWatermarkValue | Lookup | Query the source object to get the max value from watermark column. |
+| UpdateWatermarkColumnValue | StoreProcedure | Write back the new watermark value to control table to be used next time. |
+
+### Known limitations
+- The copy data tool does not currently support metadata-driven ingestion for incrementally copying only new files. However, you can bring your own parameterized pipelines to achieve that.
+- The IR name, database type, and file format type cannot be parameterized in ADF. For example, if you want to ingest data from both Oracle Server and SQL Server, you will need two different parameterized pipelines. However, the single control table can be shared by the two sets of pipelines.
+++
+## Next steps
+Try these tutorials that use the Copy Data tool:
+
+- [Quickstart: create a data factory using the Copy Data tool](quickstart-create-data-factory-copy-data-tool.md)
+- [Tutorial: copy data in Azure using the Copy Data tool](tutorial-copy-data-tool.md)
+- [Tutorial: copy on-premises data to Azure using the Copy Data tool](tutorial-hybrid-copy-data-tool.md)
data-factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-movement-security-considerations.md
Last updated 05/03/2021
> * [Version 1](v1/data-factory-data-movement-security-considerations.md) > * [Current version](data-movement-security-considerations.md)
- [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
+ [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
This article describes basic security infrastructure that data movement services in Azure Data Factory use to help secure your data. Data Factory management resources are built on Azure security infrastructure and use all possible security measures offered by Azure.
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-programmatically.md
For pipeline run properties, refer to [PipelineRun API reference](/rest/api/data
* Succeeded * Failed * Canceling
-* Canceled
+* Cancelled
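If you monitor runs from Python rather than .NET, a minimal sketch along the following lines checks a run's status against the values above. It assumes the `azure-mgmt-datafactory` and `azure-identity` packages are installed; the subscription, resource group, factory, and run ID are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholders: replace with your own values.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"
run_id = "<pipeline-run-id>"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the pipeline run and inspect its status (for example Succeeded, Failed, Canceling, Cancelled).
run = client.pipeline_runs.get(resource_group, factory_name, run_id)
print(run.status)

if run.status in ("Failed", "Cancelled"):
    print(run.message)  # failure details, if any
```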
## .NET For a complete walk-through of creating and monitoring a pipeline using .NET SDK, see [Create a data factory and pipeline using .NET](quickstart-create-data-factory-dot-net.md).
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-bulk-copy-portal.md
The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
1. Click the **Pre-copy Script** input box -> select the **Add dynamic content** below -> enter the following expression as script -> select **Finish**. ```sql
- IF EXISTS (SELECT * FROM [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}) TRUNCATE TABLE [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]
+ IF EXISTS (SELECT * FROM [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]) TRUNCATE TABLE [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]
``` ![Copy sink settings](./media/tutorial-bulk-copy-portal/copy-sink-settings.png)
databox-gateway Data Box Gateway Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-gateway/data-box-gateway-deploy-add-shares.md
Previously updated : 10/14/2020 Last updated : 07/06/2021 #Customer intent: As an IT admin, I need to understand how to add and connect to shares on Data Box Gateway so I can use it to transfer data to Azure.
In this tutorial, you learned about Data Box Gateway topics such as:
Advance to the next tutorial to learn how to administer your Data Box Gateway. > [!div class="nextstepaction"]
-> [Use local web UI to administer a Data Box Gateway](https://aka.ms/dbg-docs)
+> [Use local web UI to administer a Data Box Gateway](data-box-gateway-manage-access-power-connectivity-mode.md)
databox-online Azure Stack Edge Gpu Certificate Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-certificate-requirements.md
Certificate issuing requirements are as follows:
## Certificate algorithms
-Certificate algorithms must have the following requirements:
+Only Rivest–Shamir–Adleman (RSA) certificates are supported with your device. Elliptic Curve Digital Signature Algorithm (ECDSA) certificates are not supported.
+
+Certificates that contain an RSA public key are referred to as RSA certificates. Certificates that contain an Elliptic Curve Cryptographic (ECC) public key are referred to as ECDSA (Elliptic Curve Digital Signature Algorithm) certificates.
+
+Certificate algorithm requirements are as follows:
* Certificates must use the RSA key algorithm.
Certificate algorithms must have the following requirements:
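As an optional check (not part of the requirements above), you can inspect a certificate's public key algorithm from PowerShell; the thumbprint and store location below are placeholders:

```azurepowershell
# <Thumbprint> and the store location are placeholders for the certificate you want to check.
(Get-Item "Cert:\LocalMachine\My\<Thumbprint>").PublicKey.Oid.FriendlyName   # expected output: RSA
```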
## Certificate subject name and subject alternative name
-Certificates must have the following subject name and subject alternative name requirements:
+Certificates must meet the following subject name and subject alternative name requirements:
-* You can either use a single certificate covering all name spaces in the certificate's Subject Alternative Name (SAN) fields. Alternatively, you can use individual certificates for each of the namespaces. Both approaches require using wild cards for endpoints where required, such as binary large object (Blob).
+* You can either use a single certificate that covers all namespaces in the certificate's Subject Alternative Name (SAN) fields, or use individual certificates for each of the namespaces. Both approaches require using wildcards for endpoints where required, such as binary large object (Blob).
* Ensure that the subject name (the common name in the subject name) is part of the subject alternative names in the subject alternative name extension.
The PFX certificates installed on your Azure Stack Edge Pro device should meet t
* Use only RSA certificates with the Microsoft RSA/Schannel Cryptographic provider.
-For more information, see [Export PFX certificates with private key](azure-stack-edge-gpu-manage-certificates.md#export-certificates-as-pfx-format-with-private-key).
+For more information, see [Export PFX certificates with private key](azure-stack-edge-gpu-prepare-certificates-device-upload.md#export-certificates-as-pfx-format-with-private-key).
## Next steps
-[Use certificates with Azure Stack Edge Pro](azure-stack-edge-gpu-manage-certificates.md)
-
-[Create certificates for your Azure Stack Edge Pro using Azure Stack Hub Readiness Checker tool](azure-stack-edge-gpu-create-certificates-tool.md)
+- Create certificates for your device
-[Export PFX certificates with private key](azure-stack-edge-gpu-manage-certificates.md#export-certificates-as-pfx-format-with-private-key)
+ - Via [Azure PowerShell cmdlets](azure-stack-edge-gpu-create-certificates-powershell.md)
+ - Via [Azure Stack Hub Readiness Checker tool](azure-stack-edge-gpu-create-certificates-tool.md).
-[Troubleshooting certificate errors](azure-stack-edge-gpu-certificate-troubleshooting.md)
databox-online Azure Stack Edge Gpu Certificate Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-certificate-troubleshooting.md
Previously updated : 02/22/2021 Last updated : 06/01/2021
The following table shows common certificate errors and detailed information abo
## Next steps
-[Certificate requirements](azure-stack-edge-gpu-certificate-requirements.md)
+- Review [Certificate requirements](azure-stack-edge-gpu-certificate-requirements.md).
+- [Troubleshoot using device logs, diagnostic tests](azure-stack-edge-gpu-troubleshoot.md).
databox-online Azure Stack Edge Gpu Certificates Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-certificates-overview.md
+
+ Title: Azure Stack Edge Pro GPU, Pro R, Mini R certificate overview
+description: Describes an overview of all the certificates that can be used on Azure Stack Edge Pro GPU device.
++++++ Last updated : 06/30/2021++
+# What are certificates on Azure Stack Edge Pro GPU?
++
+This article describes the types of certificates that can be installed on your Azure Stack Edge Pro GPU device. The article also includes the details for each certificate type.
+
+## About certificates
+
+A certificate provides a link between a **public key** and an entity (such as a domain name) that has been **signed** (verified) by a trusted third party (such as a **certificate authority**). A certificate provides a convenient way of distributing trusted public encryption keys. Certificates thereby ensure that your communication is trusted and that you're sending encrypted information to the right server.
+
+## Deploying certificates on device
+
+On your Azure Stack Edge device, you can use self-signed certificates or bring your own certificates.
+
+- **Device-generated certificates**: When your device is initially configured, self-signed certificates are automatically generated. If needed, you can regenerate these certificates via the local web UI. Once the certificates are regenerated, download and import the certificates on the clients used to access your device.
+
+- **Bring your own certificates**: Optionally, you can bring your own certificates. There are guidelines that you need to follow if you plan to bring your own certificates.
+
+- Start by understanding the types of the certificates that can be used with your Azure Stack Edge device in this article.
+- Next, review the [Certificate requirements for each type of certificate](azure-stack-edge-gpu-certificate-requirements.md).
+- You can then [Create your certificates via Azure PowerShell](azure-stack-edge-gpu-create-certificates-powershell.md) or [Create your certificates via Readiness Checker tool](azure-stack-edge-gpu-create-certificates-tool.md).
+- Finally, [Convert the certificates to appropriate format](azure-stack-edge-gpu-prepare-certificates-device-upload.md) so that they are ready to upload on to your device.
+- [Upload your certificates](azure-stack-edge-gpu-manage-certificates.md#upload-certificates-on-your-device) on the device.
+- [Import the certificates on the clients](azure-stack-edge-gpu-manage-certificates.md#import-certificates-on-the-client-accessing-the-device) accessing the device.
+
+## Types of certificates
+
+The various types of certificates that you can bring for your device are as follows:
+- Signing certificates
+ - Root CA
+ - Intermediate
+
+- Node certificates
+
+- Endpoint certificates
+
+ - Azure Resource Manager certificates
+ - Blob storage certificates
+
+- Local UI certificates
+- IoT device certificates
+
+- Kubernetes certificates
+
+- Wi-Fi certificates
+- VPN certificates
+
+- Encryption certificates
+ - Support session certificates
+
+Each of these certificates is described in detail in the following sections.
+
+## Signing chain certificates
+
+These are the certificates for the authority that signs the certificates, that is, the signing certificate authority.
+
+### Types
+
+These certificates can be root certificates or intermediate certificates. Root certificates are always self-signed (that is, signed by themselves). Intermediate certificates are not self-signed and are signed by the signing authority.
+
+#### Caveats
+
+- The root certificates should be signing chain certificates.
+- The root certificates can be uploaded on your device in the following formats (a minimal export sketch follows this list):
+ - **DER** – These are available with a `.cer` file extension.
+ - **Base-64 encoded** – These are available with a `.cer` file extension.
+ - **P7b** – This format is used only for signing chain certificates that include the root and intermediate certificates.
+- Signing chain certificates are always uploaded before you upload any other certificates.
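+
+The following is a minimal, test-only sketch of exporting a root certificate in these formats; `$rootCert` is assumed to hold the signing chain certificate object, and the file paths are placeholders:
+
+```azurepowershell
+# DER-encoded .cer file.
+Export-Certificate -Cert $rootCert -FilePath "C:\certs\RootCert.cer" -Type CERT
+
+# Base-64 encoded .cer file, converted from the DER file.
+certutil -encode "C:\certs\RootCert.cer" "C:\certs\RootCert-base64.cer"
+
+# PKCS #7 (.p7b) file for the signing chain.
+Export-Certificate -Cert $rootCert -FilePath "C:\certs\SigningChain.p7b" -Type P7B
+```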
++
+## Node certificates
+
+<!--Your device could be a 1-node device or a 4-node device.--> All the nodes in your device are constantly communicating with each other and therefore need to have a trust relationship. Node certificates provide a way to establish that trust. Node certificates also come into play when you are connecting to the device node using a remote PowerShell session over HTTPS.
+
+#### Caveats
+
+- The node certificate should be provided in `.pfx` format with a private key that can be exported.
+- You can create and upload 1 wildcard node certificate or 4 individual node certificates.
+- A node certificate must be changed if the DNS domain changes but the device name does not change. If you are bringing your own node certificate, you can't change the device serial number; you can only change the domain name.
+- Use the following table to guide you when creating a node certificate. A test-only PowerShell sketch follows the table.
+
+ |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
+ |||||
+ |Node|`<NodeSerialNo>.<DnsDomain>`|`*.<DnsDomain>`<br><br>`<NodeSerialNo>.<DnsDomain>`|`mydevice1.microsoftdatabox.com` |
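+
+The following is a test-only sketch, modeled on the self-signed endpoint certificate examples elsewhere in this article set, of creating a node certificate that matches the table above. The node serial number and DNS domain are placeholders, and `$cert` is assumed to hold the signing chain certificate object:
+
+```azurepowershell
+$NodeSerial = "HWDC1T2"           # placeholder node serial number
+$domain = "microsoftdatabox.com"  # placeholder DNS domain
+
+# $cert is assumed to be the signing chain certificate created for test purposes.
+New-SelfSignedCertificate -Type Custom -DnsName "*.$domain","$NodeSerial.$domain" -Subject "CN=$NodeSerial.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
+```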
+
+
+## Endpoint certificates
+
+For any endpoints that the device exposes, a certificate is required for trusted communication. The endpoint certificates include those required when accessing the Azure Resource Manager and the blob storage via the REST APIs.
+
+When you bring in a signed certificate of your own, you also need the corresponding signing chain of the certificate. For the signing chain, Azure Resource Manager, and the blob certificates on the device, you will also need the corresponding certificates on the client machine to authenticate and communicate with the device.
+
+#### Caveats
+
+- The endpoint certificates need to be in `.pfx` format with a private key. Signing chain should be DER format (`.cer` file extension).
+- When you bring your own endpoint certificates, these can be individual certificates or multidomain certificates.
+- If you are bringing in a signing chain, the signing chain certificate must be uploaded before you upload an endpoint certificate.
+- These certificates must be changed if the device name or the DNS domain names change.
+- A wildcard endpoint certificate can be used.
+- The properties of the endpoint certificates are similar to those of a typical SSL certificate.
+- Use the following table when creating an endpoint certificate:
+
+ |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
+ |||||
+ |Azure Resource Manager|`management.<Device name>.<Dns Domain>`|`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`|`management.mydevice1.microsoftdatabox.com` |
+ |Blob storage|`*.blob.<Device name>.<Dns Domain>`|`*.blob.<Device name>.<Dns Domain>`|`*.blob.mydevice1.microsoftdatabox.com` |
+ |Multi-SAN single certificate for both endpoints|`<Device name>.<dnsdomain>`|`<Device name>.<dnsdomain>`<br>`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`<br>`*.blob.<Device name>.<Dns Domain>`|`mydevice1.microsoftdatabox.com` |
++
+## Local UI certificates
+
+You can access the local web UI of your device via a browser. To ensure that this communication is secure, you can upload your own certificate.
+
+#### Caveats
+
+- The local UI certificate is also uploaded in a `.pfx` format with a private key that can be exported.
+- After you upload the local UI certificate, you will need to restart the browser and clear the cache. Refer to the specific instructions for your browser.
+
+ |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
+ |||||
+ |Local UI| `<Device name>.<DnsDomain>`|`<Device name>.<DnsDomain>`| `mydevice1.microsoftdatabox.com` |
+
+
+## IoT Edge device certificates
+
+Your device is also an IoT device with the compute enabled by an IoT Edge device connected to it. For any secure communication between this IoT Edge device and the downstream devices that may connect to it, you can also upload IoT Edge certificates.
+
+The device has self-signed certificates that can be used if you want to use only the compute scenario with the device. However, if the device is connected to downstream devices, you'll need to bring your own certificates.
+
+There are three IoT Edge certificates that you need to install to enable this trust relation:
+
+- **Root certificate authority or the owner certificate authority**
+- **Device certificate authority**
+- **Device key certificate**
++
+#### Caveats
+
+- The IoT Edge certificates are uploaded in `.pem` format.
+
+For more information on IoT Edge certificates, see [Azure IoT Edge certificate details](../iot-edge/iot-edge-certs.md#iot-edge-certificates) and [Create IoT Edge production certificates](/azure/iot-edge/how-to-manage-device-certificates?view=iotedge-2020-11&preserve-view=true#create-production-certificates).
+
+## Kubernetes certificates
+
+If your device has an Edge container registry, then you'll need an Edge Container Registry certificate for secure communication with the client that is accessing the registry on the device.
+
+#### Caveats
+
+- The Edge Container Registry certificate must be uploaded in *.pfx* format with a private key.
+
+## VPN certificates
+
+If VPN (Point-to-site) is configured on your device, you can bring your own VPN certificate to ensure the communication is trusted. The root certificate is installed on the Azure VPN Gateway and the client certificates are installed on each client computer that connects to a VNet using Point-to-Site.
+
+#### Caveats
+
+- The VPN certificate must be uploaded in *.pfx* format with a private key.
+- The VPN certificate is not dependent on the device name, device serial number, or device configuration. It only requires the external FQDN.
+- Make sure that the client OID is set.
+
+For more information, see [Generate and export certificates for Point-to-Site using PowerShell](../vpn-gateway/vpn-gateway-certificates-point-to-site.md#generate-and-export-certificates-for-point-to-site-using-powershell).
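+
+The following is a test-only sketch, based on the pattern in the linked VPN Gateway article, of creating a self-signed Point-to-Site root certificate and a client certificate signed by it. The certificate names are placeholders; note the client authentication OID (`1.3.6.1.5.5.7.3.2`) set on the client certificate:
+
+```azurepowershell
+# Self-signed root certificate for Point-to-Site (test only).
+$vpnRoot = New-SelfSignedCertificate -Type Custom -KeySpec Signature -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign
+
+# Client certificate signed by the root; the text extension sets the client authentication OID.
+New-SelfSignedCertificate -Type Custom -DnsName "P2SChildCert" -KeySpec Signature -Subject "CN=P2SChildCert" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\CurrentUser\My" -Signer $vpnRoot -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
+```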
+
+## Wi-Fi certificates
+
+If your device is configured to operate on a WPA2-Enterprise wireless network, then you will also need a Wi-Fi certificate for any communication that occurs over the wireless network.
+
+#### Caveats
+
+- The Wi-Fi certificate must be uploaded in *.pfx* format with a private key.
+- Make sure that the client OID is set.
+
+## Support session certificates
+
+If your device is experiencing issues, a remote PowerShell Support session may be opened on the device to troubleshoot them. To enable secure, encrypted communication over this Support session, you can upload a certificate.
+
+#### Caveats
+
+- Make sure that the corresponding `.pfx` certificate with private key is installed on the client machine using the decryption tool.
+- Verify that the **Key Usage** field for the certificate is not **Certificate Signing**. To verify this, right-click the certificate, choose **Open**, and on the **Details** tab, find **Key Usage**. A PowerShell alternative is sketched after this list.
+
+- The Support session certificate must be provided as DER format with a `.cer` extension.
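+
+As an alternative to the GUI check described in the list above, the following is a minimal PowerShell sketch to inspect the **Key Usage** extension; the thumbprint and store location are placeholders:
+
+```azurepowershell
+# <Thumbprint> and the store location are placeholders for the Support session certificate.
+$supportCert = Get-Item "Cert:\LocalMachine\My\<Thumbprint>"
+($supportCert.Extensions | Where-Object { $_ -is [System.Security.Cryptography.X509Certificates.X509KeyUsageExtension] }).KeyUsages
+```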
++
+## Next steps
+
+[Review certificate requirements](azure-stack-edge-gpu-certificate-requirements.md).
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
To connect to Azure Resource Manager, you will need to create or get signing cha
For test and development use only, you can use Windows PowerShell to create certificates on your local system. While creating the certificates for the client, follow these guidelines:
-1. You first need to create a root certificate for the signing chain. For more information, see See steps to [Create signing chain certificates](azure-stack-edge-gpu-manage-certificates.md#create-signing-chain-certificate).
+1. You first need to create a root certificate for the signing chain. For more information, see the steps to [Create signing chain certificates](azure-stack-edge-gpu-create-certificates-powershell.md#create-signing-chain-certificate).
-2. You can next create the endpoint certificates for Azure Resource Manager and blob (optional). You can get these endpoints from the **Device** page in the local web UI. See the steps to [Create endpoint certificates](azure-stack-edge-gpu-manage-certificates.md#create-signed-endpoint-certificates).
+2. You can next create the endpoint certificates for Azure Resource Manager and blob (optional). You can get these endpoints from the **Device** page in the local web UI. See the steps to [Create endpoint certificates](azure-stack-edge-gpu-create-certificates-powershell.md#create-signed-endpoint-certificates).
3. For all these certificates, make sure that the subject name and subject alternate name conform to the following guidelines:
For test and development use only, you can use Windows PowerShell to create cert
\* Blob storage is not required to connect to Azure Resource Manager. It is listed here in case you are creating local storage accounts on your device.
-For more information on certificates, go to how to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
+For more information on certificates, go to how to [Upload certificates on your device and import certificates on the clients accessing your device](azure-stack-edge-gpu-manage-certificates.md).
### Upload certificates on the device

The certificates that you created in the previous step will be in the Personal store on your client. These certificates need to be exported on your client into appropriate format files that can then be uploaded to your device.
-1. The root certificate must be exported as a DER format file with *.cer* file extension. For detailed steps, see [Export certificates as a .cer format file](azure-stack-edge-gpu-manage-certificates.md#export-certificates-as-der-format).
+1. The root certificate must be exported as a DER format file with *.cer* file extension. For detailed steps, see [Export certificates as a .cer format file](azure-stack-edge-gpu-prepare-certificates-device-upload.md#export-certificates-as-der-format).
-2. The endpoint certificates must be exported as *.pfx* files with private keys. For detailed steps, see [Export certificates as .pfx file with private keys](azure-stack-edge-gpu-manage-certificates.md#export-certificates-as-pfx-format-with-private-key).
+2. The endpoint certificates must be exported as *.pfx* files with private keys. For detailed steps, see [Export certificates as .pfx file with private keys](azure-stack-edge-gpu-prepare-certificates-device-upload.md#export-certificates-as-pfx-format-with-private-key).
3. The root and endpoint certificates are then uploaded on the device using the **+Add certificate** option on the **Certificates** page in the local web UI. To upload the certificates, follow the steps in [Upload certificates](azure-stack-edge-gpu-manage-certificates.md#upload-certificates).
databox-online Azure Stack Edge Gpu Create Certificates Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-certificates-powershell.md
+
+ Title: Create certificates for Azure Stack Edge Pro GPU via Azure PowerShell | Microsoft Docs
+description: Describes how to create certificates for Azure Stack Edge Pro GPU device using Azure PowerShell cmdlets.
++++++ Last updated : 06/01/2021++
+# Use certificates with Azure Stack Edge Pro GPU device
++
+This article describes the procedure to create your own certificates using the Azure PowerShell cmdlets. The article includes the guidelines that you need to follow if you plan to bring your own certificates on your Azure Stack Edge device.
+
+Certificates ensure that the communication between your device and clients accessing it is trusted and that you're sending encrypted information to the right server. When your Azure Stack Edge device is initially configured, self-signed certificates are automatically generated. Optionally, you can bring your own certificates.
+
+You can use one of the following methods to create your own certificates for the device:
+
+ - Use the Azure PowerShell cmdlets.
+ - Use the Azure Stack Hub Readiness Checker tool to create certificate signing requests (CSRs) that would help your certificate authority issue you certificates.
+
+This article only covers how to create your own certificates using the Azure PowerShell cmdlets.
+
+## Prerequisites
+
+Before you bring your own certificates, make sure that:
+
+- You are familiar with the [Types of the certificates that can be used with your Azure Stack Edge device](azure-stack-edge-gpu-certificates-overview.md).
+- You have reviewed the [Certificate requirements for each type of certificate](azure-stack-edge-gpu-certificate-requirements.md).
++
+## Create certificates
+
+The following section describes the procedure to create signing chain and endpoint certificates.
++
+### Certificate workflow
+
+You will have a defined way to create the certificates for the devices operating in your environment. You can use the certificates provided to you by your IT administrator.
+
+**For development or test purposes only, you can also use Windows PowerShell to create certificates on your local system.** While creating the certificates for the client, follow these guidelines:
+
+1. You can create any of the following types of certificates:
+
+ - Create a single certificate valid for use with a single fully qualified domain name (FQDN). For example, *mydomain.com*.
+ - Create a wildcard certificate to secure the main domain name and multiple sub domains as well. For example, **.mydomain.com*.
+ - Create a subject alternative name (SAN) certificate that will cover multiple domain names in a single certificate.
+
+2. If you are bringing your own certificate, you will need a root certificate for the signing chain. See steps to [Create signing chain certificates](#create-signing-chain-certificate).
+
+3. You can next create the endpoint certificates for the local UI of the appliance, blob, and Azure Resource Manager. You can create three separate certificates for the appliance, blob, and Azure Resource Manager, or you can create one certificate for all three endpoints. For detailed steps, see [Create signing and endpoint certificates](#create-signed-endpoint-certificates).
+
+4. Whether you are creating three separate certificates or one certificate, specify the subject names (SN) and subject alternative names (SAN) as per the guidance provided for each certificate type.
+
+### Create signing chain certificate
+
+Create these certificates via Windows PowerShell running in administrator mode. **The certificates created this way should be used for development or test purposes only.**
+
+The signing chain certificate needs to be created only once. The other endpoint certificates will refer to this certificate for signing.
+
+
+```azurepowershell
+$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature -Subject "CN=RootCert" -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -KeyUsageProperty Sign -KeyUsage CertSign
+```
++
+### Create signed endpoint certificates
+
+Create these certificates via Windows PowerShell running in administrator mode.
+
+In these examples, endpoint certificates are created for a device with:
+ - Device name: `DBE-HWDC1T2`
+ - DNS domain: `microsoftdatabox.com`
+
+Replace these values with the name and DNS domain of your device to create certificates for your device.
+
+**Blob endpoint certificate**
+
+Create a certificate for the Blob endpoint in your personal store.
+
+```azurepowershell
+$AppName = "DBE-HWDC1T2"
+$domain = "microsoftdatabox.com"
+
+New-SelfSignedCertificate -Type Custom -DnsName "*.blob.$AppName.$domain" -Subject "CN=*.blob.$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
+```
+
+**Azure Resource Manager endpoint certificate**
+
+Create a certificate for the Azure Resource Manager endpoints in your personal store.
+
+```azurepowershell
+$AppName = "DBE-HWDC1T2"
+$domain = "microsoftdatabox.com"
+
+New-SelfSignedCertificate -Type Custom -DnsName "management.$AppName.$domain","login.$AppName.$domain" -Subject "CN=management.$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
+```
+
+**Device local web UI certificate**
+
+Create a certificate for the local web UI of the device in your personal store.
+
+```azurepowershell
+$AppName = "DBE-HWDC1T2"
+$domain = "microsoftdatabox.com"
+
+New-SelfSignedCertificate -Type Custom -DnsName "$AppName.$domain" -Subject "CN=$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
+```
+
+**Single multi-SAN certificate for all endpoints**
+
+Create a single certificate for all the endpoints in your personal store.
+
+```azurepowershell
+$AppName = "DBE-HWDC1T2"
+$domain = "microsoftdatabox.com"
+$DeviceSerial = "HWDC1T2"
+
+New-SelfSignedCertificate -Type Custom -DnsName "$AppName.$domain","$DeviceSerial.$domain","management.$AppName.$domain","login.$AppName.$domain","*.blob.$AppName.$domain" -Subject "CN=$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
+```
+
+Once the certificates are created, the next step is to upload the certificates to your Azure Stack Edge Pro GPU device.
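+
+The export steps themselves are covered when you prepare the certificates for upload, but as a minimal sketch (assuming the certificate object returned by `New-SelfSignedCertificate` was captured in a placeholder variable such as `$endpointCert`), exporting to *.pfx* with a private key looks like the following:
+
+```azurepowershell
+# $endpointCert, the file path, and the password are placeholders.
+$pfxPassword = ConvertTo-SecureString -String "<password>" -Force -AsPlainText
+Export-PfxCertificate -Cert $endpointCert -FilePath "C:\certs\endpoint.pfx" -Password $pfxPassword
+```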
+
+## Next steps
+
+[Upload certificates on your device](azure-stack-edge-gpu-manage-certificates.md).
databox-online Azure Stack Edge Gpu Create Certificates Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-certificates-tool.md
Title: Create certificates using Microsoft Azure Stack Hub Readiness Checker tool | Microsoft Docs
+ Title: Create certificates for Azure Stack Edge Pro GPU via Azure Stack Hub Readiness Checker tool
description: Describes how to create certificate requests and then get and install certificates on your Azure Stack Edge Pro GPU device using the Azure Stack Hub Readiness Checker tool.
Previously updated : 02/22/2021 Last updated : 06/01/2021
-# Create certificates for your Azure Stack Edge Pro using Azure Stack Hub Readiness Checker tool
+# Create certificates for your Azure Stack Edge Pro GPU using Azure Stack Hub Readiness Checker tool
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
Use these steps to prepare the Azure Stack Edge Pro device certificates:
You'll also see an INF folder. This contains a management.<edge-devicename> information file in clear text explaining the certificate details.
-6. Submit these files to your certificate authority (either internal or public). Be sure that your CA generates certificates, using your generated request, that meet the Azure Stack Edge Pro certificate requirements for [node certificates](azure-stack-edge-gpu-manage-certificates.md#node-certificates), [endpoint certificates](azure-stack-edge-gpu-manage-certificates.md#endpoint-certificates), and [local UI certificates](azure-stack-edge-gpu-manage-certificates.md#local-ui-certificates).
+6. Submit these files to your certificate authority (either internal or public). Be sure that your CA generates certificates, using your generated request, that meet the Azure Stack Edge Pro certificate requirements for [node certificates](azure-stack-edge-gpu-certificates-overview.md#node-certificates), [endpoint certificates](azure-stack-edge-gpu-certificates-overview.md#endpoint-certificates), and [local UI certificates](azure-stack-edge-gpu-certificates-overview.md#local-ui-certificates).
## Prepare certificates for deployment
First, you'll generate a proper folder structure and place the certificates in t
## Next steps
-[Deploy your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-prep.md)
+[Upload certificates on your device](azure-stack-edge-gpu-manage-certificates.md).
databox-online Azure Stack Edge Gpu Deploy Add Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-add-storage-accounts.md
Each of these steps is described in the following sections.
Accessing Blob storage over HTTPS requires an SSL certificate for the device. You will also upload this certificate to your Azure Stack Edge Pro device as *.pfx* file with a private key attached to it. For more information on how to create (for test and dev purposes only) and upload these certificates to your Azure Stack Edge Pro device, go to: -- [Create the blob endpoint certificate](azure-stack-edge-gpu-manage-certificates.md#create-certificates-optional).
+- [Create the blob endpoint certificate](azure-stack-edge-gpu-create-certificates-powershell.md#create-certificates).
- [Upload the blob endpoint certificate](azure-stack-edge-gpu-manage-certificates.md#upload-certificates). - [Import certificates on the client accessing the device](azure-stack-edge-gpu-manage-certificates.md#import-certificates-on-the-client-accessing-the-device).
databox-online Azure Stack Edge Gpu Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md
Previously updated : 09/10/2020 Last updated : 06/30/2020
-# Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro so I can use it to transfer data to Azure.
+# Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro GPU so I can use it to transfer data to Azure.
# Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU
In this tutorial, you learn about:
Before you configure and set up your Azure Stack Edge Pro device with GPU, make sure that:
-* You've installed the physical device as detailed in [Install Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-install.md).
+* You've installed the physical device as detailed in [Install Azure Stack Edge Pro GPU](azure-stack-edge-gpu-deploy-install.md).
* If you plan to bring your own certificates: - You should have your certificates ready in the appropriate format including the signing chain certificate. For details on certificate, go to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md)
-<!-- - If your device is deployed in Azure Government or Azure Government Secret or Azure Government top secret cloud and not deployed in Azure public cloud, a signing chain certificate is required before you can activate your device.
- For details on certificate, go to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).-->
+ - If your device is deployed in Azure Government and not deployed in Azure public cloud, a signing chain certificate is required before you can activate your device.
+ For details on certificate, go to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
## Configure certificates for device
Before you configure and set up your Azure Stack Edge Pro device with GPU, make
This is because the certificates do not reflect the updated device name and DNS domain (that are used in subject name and subject alternative). To successfully activate your device, choose one of the following options:
- - **Generate all the device certificates**. These device certificates should only be used for testing and not used with production workloads. For more information, go to [Generate device certificates on your Azure Stack Edge Pro](#generate-device-certificates).
+ - **Generate all the device certificates**. These device certificates should only be used for testing and not used with production workloads. For more information, go to [Generate device certificates on your Azure Stack Edge Pro GPU](#generate-device-certificates).
- - **Bring your own certificates**. You can bring your own signed endpoint certificates and the corresponding signing chains. You first add the signing chain and then upload the endpoint certificates. **We recommend that you always bring your own certificates for production workloads.** For more information, go to [Bring your own certificates on your Azure Stack Edge Pro device](#bring-your-own-certificates).
+ - **Bring your own certificates**. You can bring your own signed endpoint certificates and the corresponding signing chains. You first add the signing chain and then upload the endpoint certificates. **We recommend that you always bring your own certificates for production workloads.** For more information, go to [Bring your own certificates on your Azure Stack Edge Pro GPU device](#bring-your-own-certificates).
- You can bring some of your own certificates and generate some device certificates. The **Generate certificates** option will only regenerate the device certificates.
Before you configure and set up your Azure Stack Edge Pro device with GPU, make
Follow these steps to generate device certificates.
-Use these steps to regenerate and download the Azure Stack Edge Pro device certificates:
+Use these steps to regenerate and download the Azure Stack Edge Pro GPU device certificates:
1. In the local UI of your device, go to **Configuration > Certificates**. Select **Generate certificates**.
Use these steps to regenerate and download the Azure Stack Edge Pro device certi
`<Device name>_<Endpoint name>.cer`. These certificates contain the public key for the corresponding certificates installed on the device.
-You will need to install these certificates on the client system that you are using to access the endpoints on the ASE device. These certificates establish trust between the client and the device.
+You will need to install these certificates on the client system that you are using to access the endpoints on the Azure Stack Edge device. These certificates establish trust between the client and the device.
-To import and install these certificates on the client that you are using to access the device, follow the steps in [Import certificates on the clients accessing your Azure Stack Edge Pro device](azure-stack-edge-gpu-manage-certificates.md#import-certificates-on-the-client-accessing-the-device).
+To import and install these certificates on the client that you are using to access the device, follow the steps in [Import certificates on the clients accessing your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-manage-certificates.md#import-certificates-on-the-client-accessing-the-device).
If using Azure Storage Explorer, you will need to install certificates on your client in PEM format and you will need to convert the device generated certificates into PEM format.
If using Azure Storage Explorer, you will need to install certificates on your c
### Bring your own certificates
-Follow these steps to add your own certificates including the signing chain.
+You can bring your own certificates.
+
+- Start by understanding the [Types of certificates that can be used with your Azure Stack Edge device](azure-stack-edge-gpu-certificates-overview.md).
+- Next, review the [Certificate requirements for each type of certificate](azure-stack-edge-gpu-certificate-requirements.md).
+- You can then [Create your certificates via Azure PowerShell](azure-stack-edge-gpu-create-certificates-powershell.md) or [Create your certificates via Readiness Checker tool](azure-stack-edge-gpu-create-certificates-tool.md).
+- Finally, [Convert the certificates to appropriate format](azure-stack-edge-gpu-prepare-certificates-device-upload.md) so that they are ready to upload on to your device.
+
+Follow these steps to upload your own certificates including the signing chain.
1. To upload certificate, on the **Certificate** page, select **+ Add certificate**.
In this tutorial, you learn about:
> * Prerequisites > * Configure certificates for the physical device
-To learn how to activate your Azure Stack Edge Pro device, see:
+To learn how to activate your Azure Stack Edge Pro GPU device, see:
> [!div class="nextstepaction"]
-> [Activate Azure Stack Edge Pro device](./azure-stack-edge-gpu-deploy-activate.md)
+> [Activate Azure Stack Edge Pro GPU device](./azure-stack-edge-gpu-deploy-activate.md)
databox-online Azure Stack Edge Gpu Manage Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-certificates.md
Previously updated : 03/08/2021 Last updated : 06/01/2021
-# Use certificates with Azure Stack Edge Pro GPU device
+# Upload, import, and export certificates on Azure Stack Edge Pro GPU
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes the types of certificates that can be installed on your Azure Stack Edge Pro device. The article also includes the details for each certificate type along with the procedure to install and identify the expiration date.
+To ensure secure and trusted communication between your Azure Stack Edge device and the clients connecting to it, you can use self-signed certificates or bring your own certificates. This article describes how to manage these certificates, including how to upload, import, and export them, and how to view their expiration date.
-## About certificates
+To learn more about how to create these certificates, see [Create certificates using Azure PowerShell](azure-stack-edge-gpu-create-certificates-powershell.md).
-A certificate provides a link between a **public key** and an entity (such as domain name) that has been **signed** (verified) by a trusted third party (such as a **certificate authority**). A certificate provides a convenient way of distributing trusted public encryption keys. Certificates thereby ensure that your communication is trusted and that you're sending encrypted information to the right server.
-When your Azure Stack Edge Pro device is initially configured, self-signed certificates are automatically generated. Optionally, you can bring your own certificates. There are guidelines that you need to follow if you plan to bring your own certificates.
+## Upload certificates on your device
-## Types of certificates
+If you bring your own certificates, then the certificates that you created for your device by default reside in the **Personal store** on your client. These certificates need to be exported on your client into appropriate format files that can then be uploaded to your device.
-The various types of certificates that are used on your Azure Stack Edge Pro device are as follows:
-- Signing certificates
- - Root CA
- - Intermediate
-- Endpoint certificates
- - Node certificate
- - Local UI certificates
- - Azure Resource Manager certificates
- - Blob storage certificates
- - IoT device certificates
- <! WiFi certificates
- - VPN certificates-->
+### Prerequisites
-- Encryption certificates
- - Support session certificates
+Before you upload your root certificates and endpoint certificates onto the device, make sure the certificates are exported in the appropriate format.
-Each of these certificates are described in detail in the following sections.
+- The root certificate must be exported as DER format with `.cer` extension. For detailed steps, see [Export certificates as DER format](azure-stack-edge-gpu-prepare-certificates-device-upload.md#export-certificates-as-der-format).
+- The endpoint certificates must be exported as *.pfx* files with private keys. For detailed steps, see [Export certificates as *.pfx* file with private keys](azure-stack-edge-gpu-prepare-certificates-device-upload.md#export-certificates-as-pfx-format-with-private-key).
-## Signing chain certificates
+### Upload certificates
-These are the certificates for the authority that signs the certificates or the signing certificate authority.
+To upload the root and endpoint certificates on the device, use the **+ Add certificate** option on the **Certificates** page in the local web UI. Follow these steps:
-### Types
+1. Upload the root certificates first. In the local web UI, go to **Certificates > + Add certificate**.
-These certificates could be root certificates or the intermediate certificates. The root certificates are always self-signed (or signed by itself). The intermediate certificates are not self-signed and are signed by the signing authority.
+ ![Add signing chain certificate 1](media/azure-stack-edge-series-manage-certificates/add-cert-1.png)
-### Caveats
+2. Next upload the endpoint certificates.
-- The root certificates should be signing chain certificates.-- The root certificates can be uploaded on your device in the following format:
- - **DER** ΓÇô These are available as a `.cer` file extension.
- - **Base-64 encoded** ΓÇô These are available as `.cer` file extension.
- - **P7b** ΓÇô This format is used only for signing chain certificates that includes the root and intermediate certificates.
-- Signing chain certificates are always uploaded before you upload any other certificates.
+ ![Add signing chain certificate 2](media/azure-stack-edge-series-manage-certificates/add-cert-2.png)
+ Choose the certificate files in *.pfx* format and enter the password you supplied when you exported the certificate. The Azure Resource Manager certificate may take a few minutes to apply.
-## Node certificates
+ If the signing chain is not updated first, and you try to upload the endpoint certificates, then you will get an error.
-<!--Your Azure Stack Edge Pro device could be a 1-node device or a 4-node device.--> All the nodes in the device are constantly communicating with each other and therefore need to have a trust relationship. Node certificates provide a way to establish that trust. Node certificates also come into play when you are connecting to the device node using a remote PowerShell session over https.
+ ![Apply certificate error](media/azure-stack-edge-series-manage-certificates/apply-cert-error-1.png)
-### Caveats
--- The node certificate should be provided in `.pfx` format with a private key that can be exported. -- You can create and upload 1 wildcard node certificate or 4 individual node certificates. -- A node certificate must be changed if the DNS domain changes but the device name does not change. If you are bringing your own node certificate, then you can't change the device serial number, you can only change the domain name.-- Use the following table to guide you when creating a node certificate.
-
- |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
- |||||
- |Node|`<NodeSerialNo>.<DnsDomain>`|`*.<DnsDomain>`<br><br>`<NodeSerialNo>.<DnsDomain>`|`mydevice1.microsoftdatabox.com` |
-
-
-## Endpoint certificates
-
-For any endpoints that the device exposes, a certificate is required for trusted communication. The endpoint certificates include those required when accessing the Azure Resource Manager and the blob storage via the REST APIs.
-
-When you bring in a signed certificate of your own, you also need the corresponding signing chain of the certificate. For the signing chain, Azure Resource Manager, and the blob certificates on the device, you will need the corresponding certificates on the client machine also to authenticate and communicate with the device.
-
-### Caveats
--- The endpoint certificates need to be in `.pfx` format with a private key. Signing chain should be DER format (`.cer` file extension). -- When you bring your own endpoint certificates, these can be as individual certificates or multidomain certificates. -- If you are bringing in signing chain, the signing chain certificate must be uploaded before you upload an endpoint certificate.-- These certificates must be changed if the device name or the DNS domain names change.-- A wildcard endpoint certificate can be used.-- The properties of the endpoint certificates are similar to those of a typical SSL certificate. -- Use the following table when creating an endpoint certificate:-
- |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
- |||||
- |Azure Resource Manager|`management.<Device name>.<Dns Domain>`|`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`|`management.mydevice1.microsoftdatabox.com` |
- |Blob storage|`*.blob.<Device name>.<Dns Domain>`|`*.blob.< Device name>.<Dns Domain>`|`*.blob.mydevice1.microsoftdatabox.com` |
- |Multi-SAN single certificate for both endpoints|`<Device name>.<dnsdomain>`|`<Device name>.<dnsdomain>`<br>`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`<br>`*.blob.<Device name>.<Dns Domain>`|`mydevice1.microsoftdatabox.com` |
--
-## Local UI certificates
-
-You can access the local web UI of your device via a browser. To ensure that this communication is secure, you can upload your own certificate.
-
-### Caveats
--- The local UI certificate is also uploaded in a `.pfx` format with a private key that can be exported.-- After you upload the local UI certificate, you will need to restart the browser and clear the cache. Refer to the specific instructions for your browser.-
- |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
- |||||
- |Local UI| `<Device name>.<DnsDomain>`|`<Device name>.<DnsDomain>`| `mydevice1.microsoftdatabox.com` |
-
-
-## IoT Edge device certificates
-
-Your Azure Stack Edge Pro device is also an IoT device with the compute enabled by an IoT Edge device connected to it. For any secure communication between this IoT Edge device and the downstream devices that may connect to it, you can also upload IoT Edge certificates.
-
-The device has self-signed certificates that can be used if you want to use only the compute scenario with the device. If the Azure Stack Edge Pro device is however connected to downstream devices, then you'll need to bring your own certificates.
-
-There are three IoT Edge certificates that you need to install to enable this trust relation:
--- **Root certificate authority or the owner certificate authority**-- **Device certificate authority** -- **Device key certificate**-
-### Caveats
--- The IoT Edge certificates are uploaded in `.pem` format. --
-For more information on IoT Edge certificates, see [Azure IoT Edge certificate details](../iot-edge/iot-edge-certs.md#iot-edge-certificates).
-
-## Support session certificates
-
-If your Azure Stack Edge Pro device is experiencing any issues, then to troubleshoot those issues, a remote PowerShell Support session may be opened on the device. To enable a secure, encrypted communication over this Support session, you can upload a certificate.
-
-### Caveats
--- Make sure that the corresponding `.pfx` certificate with private key is installed on the client machine using the decryption tool.-- Verify that the **Key Usage** field for the certificate is not **Certificate Signing**. To verify this, right-click the certificate, choose **Open** and in the **Details** tab, find **Key Usage**. --
-### Caveats
--- The Support session certificate must be provided as DER format with a `.cer` extension.--
-<!--## VPN certificates
-
-If VPN is configured on your Azure Stack Edge Pro device, then you will also need a certificate for any communication that occurs over the VPN channel. You can bring your own VPN certificate to ensure the communication is trusted.
-
-### Caveats
--- The VPN certificate must be uploaded as a pfx format with a private key.-- The VPN certificate is not dependant on the device name, device serial number, or device configuration. It only requires the external FQDN.-- Make sure that the client OID is set.-->-
-<!--## WiFi certificates
-
-If your device is configured to operate on a wireless network, then you will also need a WiFi certificate for any communication that occurs over the wireless network.
-
-### Caveats
--- The WiFi certificate must be uploaded as a pfx format with a private key.-- Make sure that the client OID is set.-->--
-## Create certificates (optional)
-
-The following section describes the procedure to create signing chain and endpoint certificates.
-
-### Certificate workflow
-
-You will have a defined way to create the certificates for the devices operating in your environment. You can use the certificates provided to you by your IT administrator.
-
-**For development or test purposes only, you can also use Windows PowerShell to create certificates on your local system.** While creating the certificates for the client, follow these guidelines:
-
-1. You can create any of the following types of certificates:
-
- - Create a single certificate valid for use with a single fully qualified domain name (FQDN). For example, *mydomain.com*.
- - Create a wildcard certificate to secure the main domain name and multiple sub domains as well. For example, **.mydomain.com*.
- - Create a subject alternative name (SAN) certificate that will cover multiple domain names in a single certificate.
-
-2. If you are bringing your own certificate, you will need a root certificate for the signing chain. See steps to [Create signing chain certificates](#create-signing-chain-certificate).
-
-3. You can next create the endpoint certificates for the local UI of the appliance, blob, and Azure Resource Manager. You can create 3 separate certificates for the appliance, blob, and Azure Resource Manager, or you can create one certificate for all the 3 endpoints. For detailed steps, see [Create signing and endpoint certificates](#create-signed-endpoint-certificates).
-
-4. Whether you are creating 3 separate certificates or one certificate, specify the subject names (SN) and subject alternative names (SAN) as per the guidance provided for each certificate type.
-
-### Create signing chain certificate
-
-Create these certificates via Windows PowerShell running in administrator mode. **The certificates created this way should be used for development or test purposes only.**
-
-The signing chain certificate needs to be created only once. The other end point certificates will refer to this certificate for signing.
-
-
-```azurepowershell
-$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature -Subject "CN=RootCert" -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -KeyUsageProperty Sign -KeyUsage CertSign
-```
--
-### Create signed endpoint certificates
-
-Create these certificates via Windows PowerShell running in administrator mode.
-
-In these examples, endpoints certificates are created for a device with:
- - Device name: `DBE-HWDC1T2`
- - DNS domain: `microsoftdatabox.com`
-
-Replace with the name and DNS domain for your device to create certificates for your device.
-
-**Blob endpoint certificate**
-
-Create a certificate for the Blob endpoint in your personal store.
-
-```azurepowershell
-$AppName = "DBE-HWDC1T2"
-$domain = "microsoftdatabox.com"
-
-New-SelfSignedCertificate -Type Custom -DnsName "*.blob.$AppName.$domain" -Subject "CN=*.blob.$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
-```
-
-**Azure Resource Manager endpoint certificate**
-
-Create a certificate for the Azure Resource Manager endpoints in your personal store.
-
-```azurepowershell
-$AppName = "DBE-HWDC1T2"
-$domain = "microsoftdatabox.com"
-
-New-SelfSignedCertificate -Type Custom -DnsName "management.$AppName.$domain","login.$AppName.$domain" -Subject "CN=management.$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
-```
-
-**Device local web UI certificate**
-
-Create a certificate for the local web UI of the device in your personal store.
-
-```azurepowershell
-$AppName = "DBE-HWDC1T2"
-$domain = "microsoftdatabox.com"
-
-New-SelfSignedCertificate -Type Custom -DnsName "$AppName.$domain" -Subject "CN=$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
-```
-
-**Single multi-SAN certificate for all endpoints**
-
-Create a single certificate for all the endpoints in your personal store.
-
-```azurepowershell
-$AppName = "DBE-HWDC1T2"
-$domain = "microsoftdatabox.com"
-$DeviceSerial = "HWDC1T2"
-
-New-SelfSignedCertificate -Type Custom -DnsName "$AppName.$domain","$DeviceSerial.$domain","management.$AppName.$domain","login.$AppName.$domain","*.blob.$AppName.$domain" -Subject "CN=$AppName.$domain" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" -Signer $cert -KeySpec KeyExchange -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
-```
-
-Once the certificates are created, the next step is to upload the certificates on your Azure Stack Edge Pro device
--
-## Upload certificates
-
-The certificates that you created for your device by default reside in the **Personal store** on your client. These certificates need to be exported on your client into appropriate format files that can then be uploaded to your device.
-
-1. The root certificate must be exported as DER format with `.cer` extension. For detailed steps, see [Export certificates as DER format](#export-certificates-as-der-format).
-2. The endpoint certificates must be exported as *.pfx* files with private keys. For detailed steps, see [Export certificates as *.pfx* file with private keys](#export-certificates-as-pfx-format-with-private-key).
-3. The root and endpoint certificates are then uploaded on the device using the **+ Add certificate** option on the Certificates page in the local web UI.
-
- 1. Upload the root certificates first. In the local web UI, go to **Certificates > + Add certificate**.
-
- ![Add signing chain certificate 1](media/azure-stack-edge-series-manage-certificates/add-cert-1.png)
-
- 2. Next upload the endpoint certificates.
-
- ![Add signing chain certificate 2](media/azure-stack-edge-series-manage-certificates/add-cert-2.png)
-
- Choose the certificate files in *.pfx* format and enter the password you supplied when you exported the certificate. The Azure Resource Manager certificate may take a few minutes to apply.
-
- If the signing chain is not updated first, and you try to upload the endpoint certificates, then you will get an error.
-
- ![Apply certificate error](media/azure-stack-edge-series-manage-certificates/apply-cert-error-1.png)
-
- Go back and upload the signing chain certificate and then upload and apply the endpoint certificates.
+ Go back and upload the signing chain certificate and then upload and apply the endpoint certificates.
> [!IMPORTANT]
> If the device name or the DNS domain is changed, new certificates must be created. The client certificates and the device certificates should then be updated with the new device name and DNS domain.

## Import certificates on the client accessing the device
-The certificates that you created and uploaded to your device must be imported on your Windows client (accessing the device) into the appropriate certificate store.
+You can use the device-generated certificates or bring your own certificates. When using device-generated certificates, you must download the certificates to your client before you can import them into the appropriate certificate store. See [Download certificates to your client accessing the device](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates).
-1. The root certificate that you exported as the DER should now be imported in the **Trusted Root Certificate Authorities** on your client system. For detailed steps, see [Import certificates into the Trusted Root Certificate Authorities store](#import-certificates-as-der-format).
+In both cases, the certificates that you created and uploaded to your device must be imported on your Windows client (accessing the device) into the appropriate certificate store.
-2. The endpoint certificates that you exported as the `.pfx` must be exported as DER with `.cer` extension. This `.cer` is then imported in the **Personal certificate store** on your system. For detailed steps, see [Import certificates into the Personal certificate store](#import-certificates-as-der-format).
+- The root certificate that you exported as the DER should now be imported in the **Trusted Root Certificate Authorities** on your client system. For detailed steps, see [Import certificates into the Trusted Root Certificate Authorities store](#import-certificates-as-der-format).
-### Import certificates as DER format
+- The endpoint certificates that you exported as the `.pfx` must be exported as DER with `.cer` extension. This `.cer` is then imported in the **Personal certificate store** on your system. For detailed steps, see [Import certificates into the Personal certificate store](#import-certificates-as-der-format).
+
+### Import certificates as DER format
To import certificates on a Windows client, take the following steps:
4. Select **Finish**. A message to the effect that the import was successful appears.
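If you prefer to script the import instead of stepping through the wizard, the following is a minimal sketch that uses the built-in `Import-Certificate` cmdlet. The file paths are placeholders; substitute the files you exported earlier.

```powershell
# Import the root (signing chain) certificate into Trusted Root Certification Authorities.
Import-Certificate -FilePath "C:\certs\RootCA.cer" -CertStoreLocation Cert:\LocalMachine\Root

# Import an endpoint certificate (exported as a DER .cer file) into the Personal store.
Import-Certificate -FilePath "C:\certs\endpoint.cer" -CertStoreLocation Cert:\LocalMachine\My
```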
-### Export certificates as .pfx format with private key
-
-Take the following steps to export an SSL certificate with private key on a Windows machine.
-
-> [!IMPORTANT]
-> Perform these steps on the same machine that you used to create the certificate.
-
-1. Run *certlm.msc* to launch the local machine certificate store.
-
-1. Double click on the **Personal** folder, and then on **Certificates**.
-
- ![Export certificate 1](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-1.png)
-
-2. Right-click on the certificate you would like to back up and choose **All tasks > Export...**
-
- ![Export certificate 2](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-2.png)
-
-3. Follow the Certificate Export Wizard to back up your certificate to a .pfx file.
-
- ![Export certificate 3](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-3.png)
-
-4. Choose **Yes, export the private key**.
-
- ![Export certificate 4](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-4.png)
-
-5. Choose **Include all certificates in certificate path if possible**, **Export all extended properties** and **Enable certificate privacy**.
-
- > [!IMPORTANT]
- > DO NOT select the **Delete Private Key option if export is successful**.
-
- ![Export certificate 5](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-5.png)
-
-6. Enter a password you will remember. Confirm the password. The password protects the private key.
-
- ![Export certificate 6](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-6.png)
-
-7. Choose to save file on a set location.
-
- ![Export certificate 7](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-7.png)
-
-8. Select **Finish**.
-
- ![Export certificate 8](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-8.png)
-
-9. You receive a message The export was successful. Select **OK**.
-
- ![Export certificate 9](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-9.png)
-
-The .pfx file backup is now saved in the location you selected and is ready to be moved or stored for your safe keeping.
--
-### Export certificates as DER format
-
-1. Run *certlm.msc* to launch the local machine certificate store.
-
-1. In the Personal certificate store, select the root certificate. Right-click and select **All Tasks > Export...**
-
- ![Export certificate DER 1](media/azure-stack-edge-series-manage-certificates/export-cert-cer-1.png)
-
-2. The certificate wizard opens up. Select the format as **DER encoded binary X.509 (.cer)**. Select **Next**.
-
- ![Export certificate DER 2](media/azure-stack-edge-series-manage-certificates/export-cert-cer-2.png)
-
-3. Browse and select the location where you want to export the .cer format file.
-
- ![Export certificate DER 3](media/azure-stack-edge-series-manage-certificates/export-cert-cer-3.png)
-
-4. Select **Finish**.
-
- ![Export certificate DER 4](media/azure-stack-edge-series-manage-certificates/export-cert-cer-4.png)
--
-## Supported certificate algorithms
-
- Only the Rivest-Shamir-Adleman (RSA) certificates are supported with your Azure Stack Edge Pro device. Elliptic Curve Digital Signature Algorithm (ECDSA) certificates are not supported.
-
- Certificates that contain an RSA public key are referred to as RSA certificates. Certificates that contain an Elliptic Curve Cryptographic (ECC) public key are referred to as ECDSA (Elliptic Curve Digital Signature Algorithm) certificates.
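To confirm which algorithm a certificate uses before you upload it, you can inspect its public key from PowerShell. This is a quick sketch only; it lists every certificate in the local machine Personal store.

```powershell
# Supported certificates report "RSA"; ECDSA certificates report an ECC public key.
Get-ChildItem Cert:\LocalMachine\My |
    Select-Object Subject, @{ Name = 'Algorithm'; Expression = { $_.PublicKey.Oid.FriendlyName } }
```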
## View certificate expiry

If you bring in your own certificates, the certificates typically expire in 1 year or 6 months. To view the expiration date on your certificate, go to the **Certificates** page in the local web UI of your device. If you select a specific certificate, you can view its expiration date.
-<!--## Rotate certificates
-
-Rotation of certificates is not implemented in this release. You are also not notified of the pending expiration date on your certificate.
-
-View the certificate expiration date on the **Certificates** page in the local web UI of your device. Once the certificate expiration is approaching, create and upload new certificates as per the detailed instructions in [Create and upload certificates]().-->
## Next steps
-[Deploy your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-prep.md)
+Learn how to [Troubleshoot certificate issues](azure-stack-edge-gpu-certificate-troubleshooting.md)
databox-online Azure Stack Edge Gpu Prepare Certificates Device Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-certificates-device-upload.md
+
+ Title: Prepare certificates to upload on your Azure Stack Edge Pro GPU/Pro R/Mini R
+description: Describes how to prepare certificates to upload on Azure Stack Edge Pro GPU/Pro R/Mini R devices.
++++++ Last updated : 06/30/2021++
+# Prepare certificates to upload on your Azure Stack Edge Pro GPU
++
+This article describes how to convert the certificates into the appropriate format so that they're ready to upload on your Azure Stack Edge device. This procedure is typically required when you bring your own certificates.
+
+To know more about how to create these certificates, see [Create certificates using Azure PowerShell](azure-stack-edge-gpu-create-certificates-powershell.md).
++
+## Prepare certificates
+
+If you bring your own certificates, the certificates that you created for your device reside by default in the **Personal store** on your client. These certificates need to be exported from your client into files of the appropriate format that can then be uploaded to your device.
+
+- **Prepare root certificates**: The root certificate must be exported as DER format with `.cer` extension. For detailed steps, see [Export certificates as DER format](#export-certificates-as-der-format).
+
+- **Prepare endpoint certificates**: The endpoint certificates must be exported as *.pfx* files with private keys. For detailed steps, see [Export certificates as *.pfx* file with private keys](#export-certificates-as-pfx-format-with-private-key).
++
+## Export certificates as DER format
+
+1. Run *certlm.msc* to launch the local machine certificate store.
+
+1. In the Personal certificate store, select the root certificate. Right-click and select **All Tasks > Export...**
+
+ ![Export certificate DER 1](media/azure-stack-edge-series-manage-certificates/export-cert-cer-1.png)
+
+2. The certificate wizard opens up. Select the format as **DER encoded binary X.509 (.cer)**. Select **Next**.
+
+ ![Export certificate DER 2](media/azure-stack-edge-series-manage-certificates/export-cert-cer-2.png)
+
+3. Browse and select the location where you want to export the .cer format file.
+
+ ![Export certificate DER 3](media/azure-stack-edge-series-manage-certificates/export-cert-cer-3.png)
+
+4. Select **Finish**.
+
+ ![Export certificate DER 4](media/azure-stack-edge-series-manage-certificates/export-cert-cer-4.png)
++
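As an alternative to the wizard steps above, the following sketch exports the same DER-encoded `.cer` file with the `Export-Certificate` cmdlet. The subject filter and output path are placeholders; adjust them for your own certificate.

```powershell
# Find the root certificate in the local machine Personal store (adjust the filter).
$rootCert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*<your-root-certificate-name>*" }

# -Type CERT writes a DER-encoded binary X.509 (.cer) file.
Export-Certificate -Cert $rootCert -FilePath "C:\certs\RootCA.cer" -Type CERT
```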
+## Export certificates as .pfx format with private key
+
+Take the following steps to export an SSL certificate with private key on a Windows machine.
+
+> [!IMPORTANT]
+> Perform these steps on the same machine that you used to create the certificate.
+
+1. Run *certlm.msc* to launch the local machine certificate store.
+
+1. Double click on the **Personal** folder, and then on **Certificates**.
+
+ ![Export certificate 1](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-1.png)
+
+2. Right-click on the certificate you would like to back up and choose **All tasks > Export...**
+
+ ![Export certificate 2](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-2.png)
+
+3. Follow the Certificate Export Wizard to back up your certificate to a .pfx file.
+
+ ![Export certificate 3](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-3.png)
+
+4. Choose **Yes, export the private key**.
+
+ ![Export certificate 4](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-4.png)
+
+5. Choose **Include all certificates in certificate path if possible**, **Export all extended properties** and **Enable certificate privacy**.
+
+ > [!IMPORTANT]
+ > DO NOT select the **Delete the private key if the export is successful** option.
+
+ ![Export certificate 5](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-5.png)
+
+6. Enter a password you will remember. Confirm the password. The password protects the private key.
+
+ ![Export certificate 6](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-6.png)
+
+7. Choose a location to save the exported file.
+
+ ![Export certificate 7](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-7.png)
+
+8. Select **Finish**.
+
+ ![Export certificate 8](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-8.png)
+
+9. You receive a message that the export was successful. Select **OK**.
+
+ ![Export certificate 9](media/azure-stack-edge-series-manage-certificates/export-cert-pfx-9.png)
+
+The .pfx file backup is now saved in the location you selected and is ready to be moved or stored for safekeeping.
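As an alternative to the wizard steps above, you can export a certificate with its private key by using the `Export-PfxCertificate` cmdlet. This is a sketch only; the subject filter and output path are placeholders, and you're prompted for the password that protects the private key.

```powershell
# Find the endpoint certificate in the local machine Personal store (adjust the filter).
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*<your-endpoint-certificate-name>*" }

# The password protects the private key in the exported .pfx file.
$password = Read-Host -Prompt "Enter a password for the .pfx file" -AsSecureString

# -ChainOption BuildChain includes the certificates in the signing chain, if available.
Export-PfxCertificate -Cert $cert -FilePath "C:\certs\endpoint.pfx" `
    -Password $password -ChainOption BuildChain
```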
++
+## Next steps
+
+Learn how to [Upload certificates on your device](azure-stack-edge-gpu-manage-certificates.md).
databox-online Azure Stack Edge Pro R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-technical-specifications-compliance.md
Previously updated : 04/12/2021 Last updated : 07/01/2021 # Azure Stack Edge Pro R technical specifications
The Azure Stack Edge Pro R device has the following specifications for compute a
| CPU: usable | 32 vCPUs | | Memory type | Dell Compatible 16 GB RDIMM, 2666 MT/s, Dual rank | | Memory: raw | 256 GB RAM (16 x 16 GB) |
-| Memory: usable | 230 GB RAM |
+| Memory: usable | 217 GB RAM |
## Compute acceleration specifications
defender-for-iot Iot Security Azure Rtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/iot-security-azure-rtos.md
By using the recommended infrastructure Defender for IoT provides, you can gain
## Get started protecting Azure RTOS devices
-Defender-IoT-micro-agent for Azure RTOS is provided as a free download for your devices. The Defender for IoT cloud service is available with a 30-day trial per Azure subscription. To get started, download the [Defender-IoT-micro-agent for Azure RTOS](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/defender-for-iot/iot-security-azure-rtos.md).
+Defender-IoT-micro-agent for Azure RTOS is provided as a free download for your devices. The Defender for IoT cloud service is available with a 30-day trial per Azure subscription. To get started, download the [Defender-IoT-micro-agent for Azure RTOS](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/defender-for-iot/device-builders/iot-security-azure-rtos.md).
## Next steps
defender-for-iot References Horizon Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/references-horizon-sdk.md
Defender for IoT provides basic dissectors for common protocols. You can build y
This kit contains the header files needed for development. The development process requires basic steps and optional advanced steps, described in this SDK.
-Contact <support@cyberx-labs.com> for information on receiving header files and other resources.
+Contact [Microsoft Support](https://support.microsoft.com) for information on receiving header files and other resources.
## About the environment and setup
devtest-labs Devtest Lab Auto Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-auto-shutdown.md
The Notification is sent to the webhook url if a webhook was specified. If an e
We recommend that you use webhooks because they're extensively supported by various apps like Azure Logic Apps and Slack. Webhooks allow you to implement your own way of sending notifications. As an example, this article walks you through how to configure an autoshutdown notification that sends an email to the VM owner by using Azure Logic Apps. First, let's quickly go through the basic steps to enable autoshutdown notifications in your lab.
-### Create a logic app that receives email notifications
+### Create a logic app that sends email notifications
[Azure Logic Apps](../logic-apps/logic-apps-overview.md) provides many connectors that makes it easy to integrate a service with other clients, like Office 365 and Twitter. At the high level, the steps to set up a Logic App for email notification can be divided into four phases:
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
# Azure Digital Twins query plugin for Azure Data Explorer
-The Azure Digital Twins plugin for [Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview) lets you run ADX queries that access and combine data across the Azure Digital Twins graph and ADX time series databases. Use the plugin to contextualize disparate time series data by reasoning across digital twins and their relationships to gain insights into the behavior of modeled environments.
+The Azure Digital Twins plugin for [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) lets you run Azure Data Explorer queries that access and combine data across the Azure Digital Twins graph and Azure Data Explorer time series databases. Use the plugin to contextualize disparate time series data by reasoning across digital twins and their relationships to gain insights into the behavior of modeled environments.
-For example, with this plugin, you can write a KQL query that...
+For example, with this plugin, you can write a Kusto query that...
1. selects digital twins of interest via the Azure Digital Twins query plugin,
-2. joins those twins against the respective times series in ADX, and then
+2. joins those twins against the respective times series in Azure Data Explorer, and then
3. performs advanced time series analytics on those twins.
-Combining data from a twin graph in Azure Digital Twins with time series data in ADX can help you understand the operational behavior of various parts of your solution.
+Combining data from a twin graph in Azure Digital Twins with time series data in Azure Data Explorer can help you understand the operational behavior of various parts of your solution.
## Using the plugin
-In order to get the plugin running on your own ADX cluster that contains time series data, start by running the following command in ADX in order to enable the plugin:
+In order to get the plugin running on your own Azure Data Explorer cluster that contains time series data, start by running the following command in Azure Data Explorer in order to enable the plugin:
```kusto .enable plugin azure_digital_twins_query_request ```
-This command requires **All Databases admin** permission. For more information on the command, see the [.enable plugin documentation](/azure/data-explorer/kusto/management/enable-plugin).
+This command requires **All Databases admin** permission. For more information on the command, see the [`.enable` plugin documentation](/azure/data-explorer/kusto/management/enable-plugin).
-Once the plugin is enabled, you can invoke it within an ADX Kusto query with the following command. There are two placeholders, `<Azure-Digital-Twins-endpoint>` and `<Azure-Digital-Twins-query>`, which are strings representing the Azure Digital Twins instance endpoint and Azure Digital Twins query, respectively.
+Once the plugin is enabled, you can invoke it within a Kusto query with the following command. There are two placeholders, `<Azure-Digital-Twins-endpoint>` and `<Azure-Digital-Twins-query>`, which are strings representing the Azure Digital Twins instance endpoint and Azure Digital Twins query, respectively.
```kusto evaluate azure_digital_twins_query_request(<Azure-Digital-Twins-endpoint>, <Azure-Digital-Twins-query>)
The plugin works by calling the [Azure Digital Twins query API](/rest/api/digita
For more information on using the plugin, see the [Kusto documentation for the azure_digital_twins_query_request plugin](/azure/data-explorer/kusto/query/azure-digital-twins-query-request-plugin).
-To see example queries and complete a walkthrough with sample data, see [Azure Digital Twins query plugin for ADX: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries) in GitHub.
+To see example queries and complete a walkthrough with sample data, see [Azure Digital Twins query plugin for Azure Data Explorer: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries) in GitHub.
-## Using ADX IoT data with Azure Digital Twins
+## Using Azure Data Explorer IoT data with Azure Digital Twins
-There are various ways to ingest IoT data into ADX. Here are two that you might use when using ADX with Azure Digital Twins:
-* Historize digital twin property values to ADX with an Azure function that handles twin change events and writes the twin data to ADX, similar to the process used in [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md). This path will be suitable for customers who use telemetry data to bring their digital twins to life.
-* [Ingest IoT data directly into your ADX cluster from IoT Hub](/azure/data-explorer/ingest-data-iot-hub) or from other sources. Then, the Azure Digital Twins graph will be used to contextualize the time series data using joint Azure Digital Twins/ADX queries. This path may be suitable for direct-ingestion workloads.
+There are various ways to ingest IoT data into Azure Data Explorer. Here are two that you might use when using Azure Data Explorer with Azure Digital Twins:
+* Historize digital twin property values to Azure Data Explorer with an Azure function that handles twin change events and writes the twin data to Azure Data Explorer, similar to the process used in [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md). This path will be suitable for customers who use telemetry data to bring their digital twins to life.
+* [Ingest IoT data directly into your Azure Data Explorer cluster from IoT Hub](/azure/data-explorer/ingest-data-iot-hub) or from other sources. Then, the Azure Digital Twins graph will be used to contextualize the time series data using joint Azure Digital Twins/Azure Data Explorer queries. This path may be suitable for direct-ingestion workloads.
-### Mapping data across ADX and Azure Digital Twins
+### Mapping data across Azure Data Explorer and Azure Digital Twins
-If you're ingesting time series data directly into ADX, you'll likely need to convert this raw time series data into a schema suitable for joint Azure Digital Twins/ADX queries.
+If you're ingesting time series data directly into Azure Data Explorer, you'll likely need to convert this raw time series data into a schema suitable for joint Azure Digital Twins/Azure Data Explorer queries.
-An [update policy](/azure/data-explorer/kusto/management/updatepolicy) in ADX allows you to automatically transform and append data to a target table whenever new data is inserted into a source table.
+An [update policy](/azure/data-explorer/kusto/management/updatepolicy) in Azure Data Explorer allows you to automatically transform and append data to a target table whenever new data is inserted into a source table.
You can use an update policy to enrich your raw time series data with the corresponding **twin ID** from Azure Digital Twins, and persist it to a target table. Using the twin ID, the target table can then be joined against the digital twins selected by the Azure Digital Twins plugin.
-For example, say you created the following table to hold the raw time series data flowing into your ADX instance.
+For example, say you created the following table to hold the raw time series data flowing into your Azure Data Explorer instance.
```kusto .create-merge table rawData (Timestamp:datetime, someId:string, Value:string, ValueType:string) 
For instance, if you want to represent a property with three fields for roll, pi
## Next steps
-* View the plugin documentation for the Kusto language in ADX: [azure_digital_twins_query_request plugin](/azure/data-explorer/kusto/query/azure-digital-twins-query-request-plugin)
+* View the plugin documentation for the Kusto Query Language in Azure Data Explorer: [azure_digital_twins_query_request plugin](/azure/data-explorer/kusto/query/azure-digital-twins-query-request-plugin)
-* View sample queries using the plugin, including a walkthrough that runs the queries in an example scenario: [Azure Digital Twins query plugin for ADX: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries)
+* View sample queries using the plugin, including a walkthrough that runs the queries in an example scenario: [Azure Digital Twins query plugin for Azure Data Explorer: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries)
* Read about another strategy for analyzing historical data in Azure Digital Twins: [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md)
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
The ExpressRoute gateway will advertise the *Address Space(s)* of the Azure VNet
### How many prefixes can be advertised from a VNet to on-premises on ExpressRoute Private Peering?
-There is a maximum of 1000 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 999 address spaces on a single VNet connected to an ExpressRoute circuit, all 999 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 500 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 501 prefixes to on-premises.
+There is a maximum of 1000 IPv4 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 999 address spaces on a single VNet connected to an ExpressRoute circuit, all 999 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 500 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 501 prefixes to on-premises.
+
+If you are using a dual-stack circuit, there is a maximum of 100 IPv6 prefixes on a single ExpressRoute connection, or through VNet peering using gateway transit. This is in addition to the limits described above.
### What happens if I exceed the prefix limit on an ExpressRoute connection?
expressroute Expressroute Howto Add Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-ipv6-portal.md
This article describes how to add IPv6 support to connect via ExpressRoute to your resources in Azure using the Azure portal.
-## Register for Public Preview
-Before adding IPv6 support, you must first enroll your subscription. To enroll, run the following commands via Azure PowerShell:
-
-1. Sign into Azure and select the subscription. Run these commands for the subscription containing your ExpressRoute circuit, and the subscription containing your Azure deployments (if they're different).
-
- ```azurepowershell-interactive
- Connect-AzAccount
-
- Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
- ```
-
-1. Register your subscription for Public Preview using the following command:
- ```azurepowershell-interactive
- Register-AzProviderFeature -FeatureName AllowIpv6PrivatePeering -ProviderNamespace Microsoft.Network
- ```
-
-Your request will then be approved by the ExpressRoute team within 2-3 business days.
- ## Sign in to the Azure portal From a browser, go to the [Azure portal](https://portal.azure.com), and then sign in with your Azure account.
expressroute Expressroute Howto Add Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-ipv6-powershell.md
This article describes how to add IPv6 support to connect via ExpressRoute to yo
[!INCLUDE [expressroute-cloudshell](../../includes/expressroute-cloudshell-powershell-about.md)]
-## Register for Public Preview
-Before adding IPv6 support, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
-1. Sign in to Azure and select the subscription. You must do this for the subscription containing your ExpressRoute circuit, as well as the subscription containing your Azure deployments (if they are different).
-
- ```azurepowershell-interactive
- Connect-AzAccount
-
- Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
- ```
-
-2. Request to register your subscription for Public Preview using the following command:
- ```azurepowershell-interactive
- Register-AzProviderFeature -FeatureName AllowIpv6PrivatePeering -ProviderNamespace Microsoft.Network
- ```
-
-Your request will then be approved by the ExpressRoute team within 2-3 business days.
- ## Add IPv6 Private Peering to your ExpressRoute circuit 1. [Create an ExpressRoute circuit](./expressroute-howto-circuit-arm.md) or use an existing circuit. Retrieve the circuit by running the **Get-AzExpressRouteCircuit** command:
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
If you are remote and don't have fiber connectivity or you want to explore other
| **[Axtel](https://alestra.mx/landing/expressrouteazure/)** |Equinix |Dallas| | **[Beanfield Metroconnect](https://www.beanfield.com/business/cloud-exchange)** |Megaport |Toronto| | **[Bezeq International Ltd.](https://www.bezeqint.net/english)** | euNetworks | London |
-| **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/)** | Equinix | Amsterdam, Frankfurt, London, Singapore, Washington DC |
+| **[BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/)** | Equinix | Amsterdam, Frankfurt, London, Singapore, Washington DC |
| **[BroadBand Tower, Inc.](https://www.bbtower.co.jp/product-service/data-center/network/dcconnect-for-azure/)** | Equinix | Tokyo | | **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix, Megaport | Dallas | | **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong SAR |
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules.md
Title: Configure rules and actions in Azure IoT Central | Microsoft Docs description: This how-to article shows you, as a builder, how to configure telemetry-based rules and actions in your Azure IoT Central application.-- Previously updated : 12/23/2020++ Last updated : 07/06/2021 - # This article applies to operators, builders, and administrators. # Configure rules
-Rules in IoT Central serve as a customizable response tool that trigger on actively monitored events from connected devices. The following sections describe how rules are evaluated.
+Rules in IoT Central serve as a customizable response tool that triggers on actively monitored events from connected devices. The following sections describe how rules are evaluated. You can define one or more actions that happen when a rule triggers; this article describes the email, webhook, and Azure Monitor action group actions. To learn about other action types, see [Use workflows to integrate your Azure IoT Central application with other cloud services](howto-configure-rules-advanced.md).
## Select target devices

Use the target devices section to select the kind of devices this rule applies to. Filters allow you to further refine which devices are included. The filters use properties on the device template to filter down the set of devices. Filters themselves don't trigger an action. In the following screenshot, the devices that are being targeted are of device template type **Refrigerator**. The filter states that the rule should only include **Refrigerators** where the **Manufactured State** property equals **Washington**.
-![Conditions](media/howto-configure-rules/filters.png)
## Use multiple conditions
Conditions are what rules trigger on. Currently, when you add multiple condition
In the following screenshot, the conditions check when the temperature is greater than 70&deg; F and the humidity is less than 10. When both of these statements are true, the rule evaluates to true and triggers an action.
-![Screenshot shows a refrigerator monitor with conditions specified for temperature and humidity.](media/howto-configure-rules/conditions.png)
### Use a cloud property in a value field
If you choose an event type telemetry value, the **Value** drop-down includes th
Rules evaluate aggregate time windows as tumbling windows. In the screenshot below, the time window is five minutes. Every five minutes, the rule evaluates on the last five minutes of data. The data is only evaluated once in the window to which it corresponds.
-![Tumbling Windows](media/howto-configure-rules/tumbling-window.png)
+
+## Create an email action
+
+When you create an email action, the email address must be a **user ID** in the application, and the user must have signed in to the application at least once. You can also specify a note to include in the email. IoT Central shows an example of what the email will look like when the rule triggers:
++
+## Create a webhook action
+
+Webhooks let you connect your IoT Central app to other applications and services for remote monitoring and notifications. Whenever a rule triggers, your IoT Central app automatically notifies the applications and services you've connected by sending a POST request to their HTTP endpoints. The payload contains device details and rule trigger details.
+
+In this example, you connect to *RequestBin* to get notified when a rule fires:
+
+1. Open [RequestBin](https://requestbin.net/).
+
+1. Create a new RequestBin and copy the **Bin URL**.
+
+1. Add an action to your rule:
+
+ :::image type="content" source="media/howto-configure-rules/webhook-create.png" alt-text="Screenshot that shows the webhook creation screen.":::
+
+1. Choose the webhook action, enter a display name, and paste the RequestBin URL as the **Callback URL**.
+
+1. Save the rule.
+
+Now when the rule triggers, you see a new request appear in RequestBin.
+
+### Payload
+
+When a rule triggers, it makes an HTTP POST request to the callback URL. The request contains a JSON payload with the telemetry, device, rule, and application details. The payload looks like the following JSON snippet:
+
+```json
+{
+ "timestamp": "2020-04-06T00:20:15.06Z",
+ "action": {
+ "id": "<id>",
+ "type": "WebhookAction",
+ "rules": [
+ "<rule_id>"
+ ],
+ "displayName": "Webhook 1",
+ "url": "<callback_url>"
+ },
+ "application": {
+ "id": "<application_id>",
+ "displayName": "Contoso",
+ "subdomain": "contoso",
+ "host": "contoso.azureiotcentral.com"
+ },
+ "device": {
+ "id": "<device_id>",
+ "etag": "<etag>",
+ "displayName": "MXChip IoT DevKit - 1yl6vvhax6c",
+ "instanceOf": "<device_template_id>",
+ "simulated": true,
+ "provisioned": true,
+ "approved": true,
+ "cloudProperties": {
+ "City": {
+ "value": "Seattle"
+ }
+ },
+ "properties": {
+ "deviceinfo": {
+ "firmwareVersion": {
+ "value": "1.0.0"
+ }
+ }
+ },
+ "telemetry": {
+ "<interface_instance_name>": {
+ "humidity": {
+ "value": 47.33228889360127
+ }
+ }
+ }
+ },
+ "rule": {
+ "id": "<rule_id>",
+ "displayName": "Humidity monitor"
+ }
+}
+```
+
+If the rule monitors aggregated telemetry over a period of time, the payload contains a telemetry section that looks like:
+
+```json
+{
+ "telemetry": {
+ "<interface_instance_name>": {
+ "Humidity": {
+ "avg": 39.5
+ }
+ }
+ }
+}
+```
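
If you're building your own receiver rather than using RequestBin, the following sketch shows one way to read a few fields from a captured payload with PowerShell. The `webhook-payload.json` file name is a placeholder for a saved copy of a payload like the ones shown above.

```powershell
# Parse a saved webhook payload and print the rule, device, and telemetry values.
$payload = Get-Content -Path ".\webhook-payload.json" -Raw | ConvertFrom-Json

"Rule '{0}' fired for device '{1}' at {2}" -f $payload.rule.displayName,
    $payload.device.displayName, $payload.timestamp

# Telemetry is nested under the interface instance name, which varies by device template.
foreach ($interface in $payload.device.telemetry.PSObject.Properties) {
    foreach ($capability in $interface.Value.PSObject.Properties) {
        "  {0} = {1}" -f $capability.Name, $capability.Value.value
    }
}
```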
+
+### Data format change notice
+
+If you have one or more webhooks that were created and saved before **3 April 2020**, delete those webhooks and create new ones. Older webhooks use a deprecated payload format:
+
+```json
+{
+ "id": "<id>",
+ "displayName": "Webhook 1",
+ "timestamp": "2019-10-24T18:27:13.538Z",
+ "rule": {
+ "id": "<id>",
+ "displayName": "High temp alert",
+ "enabled": true
+ },
+ "device": {
+ "id": "mx1",
+ "displayName": "MXChip IoT DevKit - mx1",
+ "instanceOf": "<device-template-id>",
+ "simulated": true,
+ "provisioned": true,
+ "approved": true
+ },
+ "data": [{
+ "@id": "<id>",
+ "@type": ["Telemetry"],
+ "name": "temperature",
+ "displayName": "Temperature",
+ "value": 66.27310467496761,
+ "interfaceInstanceName": "sensors"
+ }],
+ "application": {
+ "id": "<id>",
+ "displayName": "x - Store Analytics Checkout",
+ "subdomain": "<subdomain>",
+ "host": "<host>"
+ }
+}
+```
+
+## Create an Azure Monitor group action
+
+This section describes how to use [Azure Monitor](../../azure-monitor/overview.md) *action groups* to attach multiple actions to an IoT Central rule. You can attach an action group to multiple rules. An [action group](../../azure-monitor/alerts/action-groups.md) is a collection of notification preferences defined by the owner of an Azure subscription.
+
+You can [create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md) or with an [Azure Resource Manager template](../../azure-monitor/alerts/action-groups-create-resource-manager-template.md).
+
+An action group can:
+
+- Send notifications such as an email, an SMS, or make a voice call.
+- Run an action such as calling a webhook.
+
+The following screenshot shows an action group that sends email and SMS notifications and calls a webhook:
++
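If you prefer to create the action group from the command line, the following is a sketch that uses the Az.Monitor PowerShell cmdlets. The receiver names, email address, phone number, and webhook URL are placeholders.

```powershell
# Define email, SMS, and webhook receivers for the action group.
$email   = New-AzActionGroupReceiver -Name "ops-email" -EmailReceiver -EmailAddress "operator@contoso.com"
$sms     = New-AzActionGroupReceiver -Name "ops-sms" -SmsReceiver -CountryCode "1" -PhoneNumber "4255550100"
$webhook = New-AzActionGroupReceiver -Name "ops-webhook" -WebhookReceiver -ServiceUri "https://contoso.example/alerts"

# Create (or update) the action group in the same subscription as the IoT Central application.
Set-AzActionGroup -ResourceGroupName "MyIoTCentralResourceGroup" `
    -Name "IoTCentralAlerts" -ShortName "iotcalerts" `
    -Receiver $email, $sms, $webhook
```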
+To use an action group in an IoT Central rule, the action group must be in the same Azure subscription as the IoT Central application.
+
+When you add an action to the rule in IoT Central, select **Azure Monitor Action Groups**.
+
+Choose an action group from your Azure subscription:
++
+Select **Save**. The action group now appears in the list of actions to run when the rule is triggered.
+
+The following table summarizes the information sent to the supported action types:
+
+| Action type | Output format |
+| -- | -- |
+| Email | Standard IoT Central email template |
+| SMS | Azure IoT Central alert: ${applicationName} - "${ruleName}" triggered on "${deviceName}" at ${triggerDate} ${triggerTime} |
+| Voice | Azure I.O.T Central alert: rule "${ruleName}" triggered on device "${deviceName}" at ${triggerDate} ${triggerTime}, in application ${applicationName} |
+| Webhook | { "schemaId" : "AzureIoTCentralRuleWebhook", "data": {[regular webhook payload](howto-create-webhooks.md#payload)}} |
+
+The following text is an example SMS message from an action group:
+
+`iotcentral: Azure IoT Central alert: Contoso - "Low pressure alert" triggered on "Motion sensor 2" at March 20, 2019 10:12 UTC`
## Use rules with IoT Edge modules
-A restriction applies to rules that are applied to IoT Edge modules. Rules on telemetry from different modules aren't evaluated as valid rules. Take the following as an example. The first condition of the rule is on a temperature telemetry from Module A. The second condition of the rule is on a humidity telemetry on Module B. Since the two conditions are from different modules, this is an invalid set of conditions. The rule isn't valid and will throw an error on trying to save the rule.
+A restriction applies to rules that are applied to IoT Edge modules. Rules on telemetry from different modules aren't evaluated as valid rules. Take the following example: the first condition of the rule is on a temperature telemetry from Module A, and the second condition is on a humidity telemetry from Module B. Because the two conditions are from different modules, you have an invalid set of conditions. The rule isn't valid and throws an error when you try to save it.
## Next steps
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-iot-central-application.md
If you select **Create app**, you can provide the necessary information to creat
The **My apps** page lists all the IoT Central applications you have access to. The list includes applications you created and applications that you've been granted access to. > [!TIP]
-> All the applications you create using a standard pricing plan on the Azure IoT Central site use the **IOTC** resource group in your subscription. The approaches decribed in the following section let you choose a resource group to use.
+> All the applications you create using a standard pricing plan on the Azure IoT Central site use the **IOTC** resource group in your subscription. The approaches described in the following section let you choose a resource group to use.
## Other approaches You can also use the following approaches to create an IoT Central application: - [Create an IoT Central application from the Azure portal](howto-manage-iot-central-from-portal.md#create-iot-central-applications)-- [Create an IoT Central application using the Azure CLI](howto-manage-iot-central-from-cli.md#create-an-application)-- [Create an IoT Central application using PowerShell](howto-manage-iot-central-from-powershell.md#create-an-application)-- [Create an IoT Central application programmatically](howto-manage-iot-central-programmatically.md)
+- [Create an IoT Central application using the command line](howto-manage-iot-central-from-cli.md#create-an-application)
+- [Create an IoT Central application programmatically](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/)
## Next steps
iot-central Howto Create Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-webhooks.md
- Title: Create webhooks on rules in Azure IoT Central | Microsoft Docs
-description: Create webhooks in Azure IoT Central to automatically notify other applications when rules fire.
-- Previously updated : 04/03/2020-----
-# This topic applies to builders and administrators.
--
-# Create webhook actions on rules in Azure IoT Central
-
-Webhooks enable you to connect your IoT Central app to other applications and services for remote monitoring and notifications. Webhooks automatically notify other applications and services you connect whenever a rule is triggered in your IoT Central app. Your IoT Central app sends a POST request to the other application's HTTP endpoint whenever a rule is triggered. The payload contains device details and rule trigger details.
-
-## Set up the webhook
-
-In this example, you connect to RequestBin to get notified when rules fire using webhooks.
-
-1. Open [RequestBin](https://requestbin.net/).
-
-1. Create a new RequestBin and copy the **Bin URL**.
-
-1. Create a [telemetry rule](tutorial-create-telemetry-rules.md). Save the rule and add a new action.
-
- ![Webhook creation screen](media/howto-create-webhooks/webhookcreate.png)
-
-1. Choose the webhook action and provide a display name and paste the Bin URL as the Callback URL.
-
-1. Save the rule.
-
-Now when the rule is triggered, you see a new request appear in RequestBin.
-
-## Payload
-
-When a rule is triggered, an HTTP POST request is made to the callback URL containing a json payload with the telemetry, device, rule, and application details. The payload could look like the following:
-
-```json
-{
- "timestamp": "2020-04-06T00:20:15.06Z",
- "action": {
- "id": "<id>",
- "type": "WebhookAction",
- "rules": [
- "<rule_id>"
- ],
- "displayName": "Webhook 1",
- "url": "<callback_url>"
- },
- "application": {
- "id": "<application_id>",
- "displayName": "Contoso",
- "subdomain": "contoso",
- "host": "contoso.azureiotcentral.com"
- },
- "device": {
- "id": "<device_id>",
- "etag": "<etag>",
- "displayName": "MXChip IoT DevKit - 1yl6vvhax6c",
- "instanceOf": "<device_template_id>",
- "simulated": true,
- "provisioned": true,
- "approved": true,
- "cloudProperties": {
- "City": {
- "value": "Seattle"
- }
- },
- "properties": {
- "deviceinfo": {
- "firmwareVersion": {
- "value": "1.0.0"
- }
- }
- },
- "telemetry": {
- "<interface_instance_name>": {
- "humidity": {
- "value": 47.33228889360127
- }
- }
- }
- },
- "rule": {
- "id": "<rule_id>",
- "displayName": "Humidity monitor"
- }
-}
-```
-If the rule monitors aggregated telemetry over a period of time, the payload will contain a different telemetry section.
-
-```json
-{
- "telemetry": {
- "<interface_instance_name>": {
- "Humidity": {
- "avg": 39.5
- }
- }
- }
-}
-```
-
-## Data format change notice
-
-If you have one or more webhooks created and saved before **3 April 2020**, you will need to delete the webhook and create a new webhook. This is because older webhooks use an older payload format that will be deprecated in the future.
-
-### Webhook payload (format deprecated as of 3 April 2020)
-
-```json
-{
- "id": "<id>",
- "displayName": "Webhook 1",
- "timestamp": "2019-10-24T18:27:13.538Z",
- "rule": {
- "id": "<id>",
- "displayName": "High temp alert",
- "enabled": true
- },
- "device": {
- "id": "mx1",
- "displayName": "MXChip IoT DevKit - mx1",
- "instanceOf": "<device-template-id>",
- "simulated": true,
- "provisioned": true,
- "approved": true
- },
- "data": [{
- "@id": "<id>",
- "@type": ["Telemetry"],
- "name": "temperature",
- "displayName": "Temperature",
- "value": 66.27310467496761,
- "interfaceInstanceName": "sensors"
- }],
- "application": {
- "id": "<id>",
- "displayName": "x - Store Analytics Checkout",
- "subdomain": "<subdomain>",
- "host": "<host>"
- }
-}
-```
-
-## Known limitations
-
-Currently there is no programmatic way of subscribing/unsubscribing from these webhooks through an API.
-
-If you have ideas for how to improve this feature, post your suggestions to our [User voice forum](https://feedback.azure.com/forums/911455-azure-iot-central).
-
-## Next steps
-
-Now that you've learned how to set up and use webhooks, the suggested next step is to explore [configuring Azure Monitor Action Groups](howto-use-action-groups.md).
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
In addition to seeing the status of your exports in IoT Central, you can use [Az
- Number of messages successfully exported to destinations. - Number of errors encountered.
-To learn more, see [Monitor the overall health of an IoT Central application](howto-monitor-application-health.md).
+To learn more, see [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
## Destinations
iot-central Howto Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-faq.md
To get information about your IoT Central application:
Use the **Copy info** button to copy the information to the clipboard.
+## How many IoT Central applications can I deploy in my subscription?
+
+Each Azure subscription has default quotas that could impact the scope of your IoT solution. Currently, IoT Central limits the number of applications you can deploy in a subscription to 10. If you need to increase this limit, contact [Microsoft support](https://azure.microsoft.com/support/options/).
+ ## How do I transfer a device from IoT Hub to IoT Central? A device can connect to an IoT hub directly using a connection string or using the [Device Provisioning Service (DPS)](../../iot-dps/about-iot-dps.md). IoT Central always uses DPS.
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-cli.md
Title: Manage IoT Central from Azure CLI | Microsoft Docs
-description: This article describes how to create and manage your IoT Central application using CLI. You can view, modify, and remove the application using CLI.
+ Title: Manage IoT Central from Azure CLI or PowerShell | Microsoft Docs
+description: This article describes how to create and manage your IoT Central application using the Azure CLI or PowerShell. You can view, modify, and remove the application using these tools.
Previously updated : 03/27/2020 Last updated : 07/06/2021 --+
-# Manage IoT Central from Azure CLI
+# Manage IoT Central from Azure CLI or PowerShell
+Instead of creating and managing IoT Central applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, you can use [Azure CLI](/cli/azure/) or [Azure PowerShell](/powershell/azure/) to manage your applications.
-Instead of creating and managing IoT Central applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, you can use [Azure CLI](/cli/azure/) to manage your applications.
+If you prefer to use a language such as JavaScript, Python, C#, Ruby, or Go, see the [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository for code samples that show you how to create, update, list, and delete Azure IoT Central applications.
+## Prerequisites
+# [Azure CLI](#tab/azure-cli)
-## Create an application
+# [PowerShell](#tab/azure-powershell)
++
+> [!TIP]
+> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps#change-the-active-subscription).
+
+Run the following command to check the [IoT Central module](/powershell/module/az.iotcentral/) is installed in your PowerShell environment:
+
+```powershell
+Get-InstalledModule -name Az.I*
+```
+
+If the list of installed modules doesn't include **Az.IotCentral**, run the following command:
+
+```powershell
+Install-Module Az.IotCentral
+```
++ [!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
+## Create an application
+
+# [Azure CLI](#tab/azure-cli)
+ Use the [az iot central app create](/cli/azure/iot/central/app#az_iot_central_app_create) command to create an IoT Central application in your Azure subscription. For example: ```azurecli-interactive
These commands first create a resource group in the east US region for the appli
| template | The application template to use. For more information, see the following table. | | display-name | The name of the application as displayed in the UI. |
+# [PowerShell](#tab/azure-powershell)
+
+Use the [New-AzIotCentralApp](/powershell/module/az.iotcentral/New-AzIotCentralApp) cmdlet to create an IoT Central application in your Azure subscription. For example:
+
+```powershell
+# Create a resource group for the IoT Central application
+New-AzResourceGroup -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -Location "East US"
+```
+
+```powershell
+# Create an IoT Central application
+New-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -Name "myiotcentralapp" -Subdomain "mysubdomain" `
+ -Sku "ST1" -Template "iotc-pnp-preview" `
+ -DisplayName "My Custom Display Name"
+```
+
+The script first creates a resource group in the east US region for the application. The following table describes the parameters used with the **New-AzIotCentralApp** command:
+
+|Parameter |Description |
+|||
+|ResourceGroupName |The resource group that contains the application. This resource group must already exist in your subscription. |
+|Location |By default, this cmdlet uses the location from the resource group. Currently, you can create an IoT Central application in the **Australia**, **Asia Pacific**, **Europe**, **United States**, **United Kingdom**, and **Japan** geographies. |
+|Name |The name of the application in the Azure portal. |
+|Subdomain |The subdomain in the URL of the application. In the example, the application URL is `https://mysubdomain.azureiotcentral.com`. |
+|Sku |Currently, you can use either **ST1** or **ST2**. See [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). |
+|Template | The application template to use. For more information, see the following table. |
+|DisplayName |The name of the application as displayed in the UI. |
+++ ### Application templates [!INCLUDE [iot-central-template-list](../../../includes/iot-central-template-list.md)] If you've created your own application template, you can use it to create a new application. When asked for an application template, enter the app ID shown in the exported app's URL shareable link under the [Application template export](howto-use-app-templates.md#create-an-application-template) section of your app.
-## View your applications
+## View applications
+
+# [Azure CLI](#tab/azure-cli)
Use the [az iot central app list](/cli/azure/iot/central/app#az_iot_central_app_list) command to list your IoT Central applications and view metadata.
+# [PowerShell](#tab/azure-powershell)
+
+Use the [Get-AzIotCentralApp](/powershell/module/az.iotcentral/Get-AzIotCentralApp) cmdlet to list your IoT Central applications and view metadata.
+++ ## Modify an application
+# [Azure CLI](#tab/azure-cli)
+ Use the [az iot central app update](/cli/azure/iot/central/app#az_iot_central_app_update) command to update the metadata of an IoT Central application. For example, to change the display name of your application: ```azurecli-interactive
az iot central app update --name myiotcentralapp \
--set displayName="My new display name" ```
-## Remove an application
+# [PowerShell](#tab/azure-powershell)
+
+Use the [Set-AzIotCentralApp](/powershell/module/az.iotcentral/set-aziotcentralapp) cmdlet to update the metadata of an IoT Central application. For example, to change the display name of your application:
+
+```powershell
+Set-AzIotCentralApp -Name "myiotcentralapp" `
+ -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -DisplayName "My new display name"
+```
+++
+## Delete an application
+
+# [Azure CLI](#tab/azure-cli)
Use the [az iot central app delete](/cli/azure/iot/central/app#az_iot_central_app_delete) command to delete an IoT Central application. For example:
az iot central app delete --name myiotcentralapp \
--resource-group MyIoTCentralResourceGroup ```
+# [PowerShell](#tab/azure-powershell)
+
+Use the [Remove-AzIotCentralApp](/powershell/module/az.iotcentral/Remove-AzIotCentralApp) cmdlet to delete an IoT Central application. For example:
+
+```powershell
+Remove-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -Name "myiotcentralapp"
+```
+++ ## Next steps
-Now that you've learned how to manage Azure IoT Central applications from Azure CLI, here is the suggested next step:
+Now that you've learned how to manage Azure IoT Central applications from Azure CLI or PowerShell, here is the suggested next step:
> [!div class="nextstepaction"] > [Administer your application](howto-administer.md)
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-portal.md
Title: Manage IoT Central from the Azure portal | Microsoft Docs
-description: This article describes how to create and manage your IoT Central applications from the Azure portal.
+ Title: Manage and monitor IoT Central in the Azure portal | Microsoft Docs
+description: This article describes how to create, manage, and monitor your IoT Central applications from the Azure portal.
Previously updated : 04/17/2021 Last updated : 07/06/2021
-# Manage IoT Central from the Azure portal
+# Manage and monitor IoT Central from the Azure portal
-
-You can use the [Azure portal](https://portal.azure.com) to create and manage IoT Central applications.
+You can use the [Azure portal](https://portal.azure.com) to create, manage, and monitor IoT Central applications.
## Create IoT Central applications
You can use the [Azure portal](https://portal.azure.com) to create and manage Io
To create an application, navigate to the [IoT Central Application](https://ms.portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal:
-![Create IoT Central form](media/howto-manage-iot-central-from-portal/image6a.png)
+![Create IoT Central form](media/howto-manage-iot-central-from-portal/create-form.png)
* **Resource name** is a unique name you can choose for your IoT Central application in your Azure resource group.
To get started, search for your application in the search bar at the top of the
When you select an application in the search results, the Azure portal shows you its overview. You can navigate to the application by selecting the **IoT Central Application URL**:
-![Screenshot that shows the "Overview" page with the "IoT Central Application URL" highlighted.](media/howto-manage-iot-central-from-portal/image3.png)
+![Screenshot that shows the "Overview" page with the "IoT Central Application URL" highlighted.](media/howto-manage-iot-central-from-portal/highlight-application.png)
To move the application to a different resource group, select **change** beside the resource group. On the **Move resources** page, choose the resource group you'd like to move this application to:
-![Screenshot that shows the "Overview" page with the "Resource group (change)" highlighted.](media/howto-manage-iot-central-from-portal/image4a.png)
+![Screenshot that shows the "Overview" page with the "Resource group (change)" highlighted.](media/howto-manage-iot-central-from-portal/highlight-resource-group.png)
To move the application to a different subscription, select **change** beside the subscription. On the **Move resources** page, choose the subscription you'd like to move this application to:
-![Management portal: resource management](media/howto-manage-iot-central-from-portal/image5a.png)
+![Management portal: resource management](media/howto-manage-iot-central-from-portal/highlight-subscription.png)
+
+## Monitor application health
+
+> [!NOTE]
+> Metrics are only available for version 3 IoT Central applications. To learn how to check your application version, see [How do I get information about my application?](howto-faq.md#how-do-i-get-information-about-my-application).
+
+You can use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
+
+Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
+
+> [!TIP]
+> Applications that use the free trial plan don't have an associated Azure subscription and so don't support Azure Monitor metrics. You can [convert an application to a standard pricing plan](./howto-faq.md#how-do-i-move-from-a-free-to-a-standard-pricing-plan) and get access to these metrics.
+
+### View metrics in the Azure portal
+
+The following steps assume you have an [IoT Central application](./howto-create-iot-central-application.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-data.md).
+
+To view IoT Central metrics in the portal:
+
+1. Navigate to your IoT Central application resource in the portal. By default, IoT Central resources are located in a resource group called **IOTC**.
+1. To create a chart from your application's metrics, select **Metrics** in the **Monitoring** section.
+
+![Azure Metrics](media/howto-manage-iot-central-from-portal/metrics.png)
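You can also retrieve the same metric values from the command line. The following is a sketch that uses the `Get-AzMetric` cmdlet from the Az.Monitor module; the resource group, application name, and metric name are placeholders to adjust for your environment.

```powershell
# Look up the IoT Central application resource.
$app = Get-AzResource -ResourceGroupName "IOTC" `
    -ResourceType "Microsoft.IoTCentral/IoTApps" -Name "myiotcentralapp"

# Query one of the application's metrics for the last hour at a one-minute grain.
Get-AzMetric -ResourceId $app.ResourceId -MetricName "connectedDeviceCount" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -TimeGrain 00:01:00 |
    Select-Object -ExpandProperty Data
```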
+
+### Azure portal permissions
+
+Access to metrics in the Azure portal is managed by [Azure role-based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application, resource group, or subscription to grant them access. You must add a user in the portal even if they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer-grained access control.
+
+### IoT Central metrics
+
+For a list of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
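You can also enumerate the metric definitions directly from your application resource. A minimal sketch, assuming the Azure CLI and a placeholder resource ID:

```bash
# List the metric definitions exposed by an IoT Central application resource.
az monitor metrics list-definitions --resource "<iot-central-resource-id>" --output table
```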
+
+### Metrics and invoices
+
+Metrics may differ from the numbers shown on your Azure IoT Central invoice. This can happen for a number of reasons, such as:
+
+* IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics.
+
+* IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. You may choose to validate your device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
+
+* While metrics may show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).
## Next steps
-Now that you've learned how to manage Azure IoT Central applications from the Azure portal, here is the suggested next step:
+Now that you've learned how to manage and monitor Azure IoT Central applications from the Azure portal, here is the suggested next step:
> [!div class="nextstepaction"] > [Administer your application](howto-administer.md)
iot-central Howto Manage Iot Central From Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-powershell.md
- Title: Manage IoT Central from Azure PowerShell | Microsoft Docs
-description: This article describes how to create and manage your IoT Central applications from Azure PowerShell.
---- Previously updated : 03/27/2020-----
-# Manage IoT Central from Azure PowerShell
--
-Instead of creating and managing IoT Central applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, you can use [Azure PowerShell](/powershell/azure/) to manage your applications.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
---
-If you prefer to run Azure PowerShell on your local machine, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps). When you run Azure PowerShell locally, use the **Connect-AzAccount** cmdlet to sign in to Azure before you try the cmdlets in this article.
-
-> [!TIP]
-> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps#change-the-active-subscription).
-
-## Install the IoT Central module
-
-Run the following command to check the [IoT Central module](/powershell/module/az.iotcentral/) is installed in your PowerShell environment:
-
-```powershell
-Get-InstalledModule -name Az.I*
-```
-
-If the list of installed modules doesn't include **Az.IotCentral**, run the following command:
-
-```powershell
-Install-Module Az.IotCentral
-```
-
-## Create an application
-
-Use the [New-AzIotCentralApp](/powershell/module/az.iotcentral/New-AzIotCentralApp) cmdlet to create an IoT Central application in your Azure subscription. For example:
-
-```powershell
-# Create a resource group for the IoT Central application
-New-AzResourceGroup -ResourceGroupName "MyIoTCentralResourceGroup" `
- -Location "East US"
-```
-
-```powershell
-# Create an IoT Central application
-New-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
- -Name "myiotcentralapp" -Subdomain "mysubdomain" `
- -Sku "ST1" -Template "iotc-pnp-preview" `
- -DisplayName "My Custom Display Name"
-```
-
-The script first creates a resource group in the east US region for the application. The following table describes the parameters used with the **New-AzIotCentralApp** command:
-
-|Parameter |Description |
-|||
-|ResourceGroupName |The resource group that contains the application. This resource group must already exist in your subscription. |
-|Location |By default, this cmdlet uses the location from the resource group. Currently, you can create an IoT Central application in the **Australia**, **Asia Pacific**, **Europe**, **United States**, **United Kingdom**, and **Japan** geographies. |
-|Name |The name of the application in the Azure portal. |
-|Subdomain |The subdomain in the URL of the application. In the example, the application URL is `https://mysubdomain.azureiotcentral.com`. |
-|Sku |Currently, you can use either **ST1** or **ST2**. See [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). |
-|Template | The application template to use. For more information, see the following table. |
-|DisplayName |The name of the application as displayed in the UI. |
-
-### Application templates
--
-If you've created your own application template, you can use it to create a new application. When asked for an application template, enter the app ID shown in the exported app's URL shareable link under the [Application template export](howto-use-app-templates.md#create-an-application-template) section of your app.
-
-## View your IoT Central applications
-
-Use the [Get-AzIotCentralApp](/powershell/module/az.iotcentral/Get-AzIotCentralApp) cmdlet to list your IoT Central applications and view metadata.
-
-## Modify an application
-
-Use the [Set-AzIotCentralApp](/powershell/module/az.iotcentral/set-aziotcentralapp) cmdlet to update the metadata of an IoT Central application. For example, to change the display name of your application:
-
-```powershell
-Set-AzIotCentralApp -Name "myiotcentralapp" `
- -ResourceGroupName "MyIoTCentralResourceGroup" `
- -DisplayName "My new display name"
-```
-
-## Remove an application
-
-Use the [Remove-AzIotCentralApp](/powershell/module/az.iotcentral/Remove-AzIotCentralApp) cmdlet to delete an IoT Central application. For example:
-
-```powershell
-Remove-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
- -Name "myiotcentralapp"
-```
-
-## Next steps
-
-Now that you've learned how to manage Azure IoT Central applications from Azure PowerShell, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Administer your application](howto-administer.md)
iot-central Howto Manage Iot Central Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-programmatically.md
- Title: Manage IoT Central programmatically | Microsoft Docs
-description: This article describes how to create and manage your IoT Central programmatically. You can view, modify, and remove the application using multiple language SDKs such as JavaScript, Python, C#, Ruby, and Go.
---- Previously updated : 12/23/2020--
-# Manage IoT Central programmatically
--
-Instead of creating and managing IoT Central applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, you can manage your applications programmatically using the Azure SDKs. Supported languages include JavaScript, Python, C#, Ruby, and Go.
-
-## Install the SDK
-
-The following table lists the SDK repositories and package installation commands:
-
-| SDK repository | Package install |
-| -- | |
-| [Azure IotCentralClient SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/iotcentral/arm-iotcentral) | `npm install @azure/arm-iotcentral` |
-| [Microsoft Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/iothub/azure-mgmt-iotcentral/azure/mgmt/iotcentral) | `pip install azure-mgmt-iotcentral` |
-| [Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/iotcentral/Microsoft.Azure.Management.IotCentral) | `dotnet add package Microsoft.Azure.Management.IotCentral` |
-| [Microsoft Azure SDK for Ruby - Resource Management (preview)](https://github.com/Azure/azure-sdk-for-ruby/tree/master/management/azure_mgmt_iot_central/lib/2018-09-01/generated/azure_mgmt_iot_central) | `gem install azure_mgmt_iot_central` |
-| [Azure SDK for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/iotcentral) | [Maven package](https://search.maven.org/search?q=a:azure-mgmt-iotcentral) |
-| [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go/tree/master/services/iotcentral/mgmt/2018-09-01/iotcentral) | [Package releases](https://github.com/Azure/azure-sdk-for-go/releases) |
-
-## Samples
-
-The [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository has code samples for multiple programming languages that show you how to create, update, list, and delete Azure IoT Central applications.
--
-## Next steps
-
-Now that you've learned how to manage Azure IoT Central applications programmatically, a suggested next step is to learn more about the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) service.
iot-central Howto Monitor Application Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
- Title: Monitor the health of an Azure IoT Central application | Microsoft Docs
-description: Monitor the overall health of the devices connected to your IoT Central application.
-- Previously updated : 01/27/2021----
-#Customer intent: As an operator, I want to monitor the overall health of the devices and data exports in my IoT Central application.
--
-# Monitor the overall health of an IoT Central application
-
-> [!NOTE]
-> Metrics are only available for version 3 IoT Central applications. To learn how to check your application version, see [How do I get information about my application?](howto-faq.md#how-do-i-get-information-about-my-application).
-
-In this article, you learn how to use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
-
-Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
-
-### Trial applications
-
-Applications that use the free trial plan don't have an associated Azure subscription and so don't support Azure Monitor metrics. You can [convert an application to a standard pricing plan](./howto-faq.md#how-do-i-move-from-a-free-to-a-standard-pricing-plan) and get access to these metrics.
-
-## View metrics in the Azure portal
-
-The following steps assume you have an [IoT Central application](./howto-create-iot-central-application.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-data.md).
-
-To view IoT Central metrics in the portal:
-
-1. Navigate to your IoT Central application resource in the portal. By default, IoT Central resources are located in a resource group called **IOTC**.
-1. To create a chart from your application's metrics, select **Metrics** in the **Monitoring** section.
-
-![Azure Metrics](media/howto-monitor-application-health/metrics.png)
-
-### Azure portal permissions
-
-Access to metrics in the Azure portal is managed by [Azure role based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application/resource group/subscription to grant them access. You must add a user in the portal even they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer grained access control.
-
-## IoT Central metrics
-
-For a list of of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
-
-### Metrics and invoices
-
-Metrics may differ from the numbers shown on your Azure IoT Central invoice. This situation occurs for a number of reasons such as:
--- IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics.--- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. You may choose to validate your device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.--- While metrics may show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).-
-## Next steps
-
-Now that you've learned how to use application templates, the suggested next step is to learn how to [Manage IoT Central from the Azure portal](howto-manage-iot-central-from-portal.md).
iot-central Howto Use Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-action-groups.md
- Title: Run multiple actions from an Azure IoT Central rule | Microsoft Docs
-description: Run multiple actions from a single IoT Central rule and create reusable groups of actions that you can run from multiple rules.
--- Previously updated : 12/06/2019----
-# This article applies to builders and administrators.
--
-# Group multiple actions to run from one or more rules
-
-In Azure IoT Central, you create rules to run actions when a condition is met. Rules are based on device telemetry or events. For example, you can notify an operator when the temperature of a device exceeds a threshold. This article describes how to use [Azure Monitor](../../azure-monitor/overview.md) *action groups* to attach multiple actions to an IoT Central rule. You can attach an action group to multiple rules. An [action group](../../azure-monitor/alerts/action-groups.md) is a collection of notification preferences defined by the owner of an Azure subscription.
-
-## Prerequisites
--- An application created using a standard pricing plan-- An Azure account and subscription to create and manage Azure Monitor action groups-
-## Create action groups
-
-You can [create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md) or with an [Azure Resource Manager template](../../azure-monitor/alerts/action-groups-create-resource-manager-template.md).
-
-An action group can:
--- Send notifications such as an email, an SMS, or make a voice call.-- Run an action such as calling a webhook.-
-The following screenshot shows an action group that sends email and SMS notifications and calls a webhook:
-
-![Action group](media/howto-use-action-groups/actiongroup.png)
-
-To use an action group in an IoT Central rule, the action group must be in the same Azure subscription as the IoT Central application.
-
-## Use an action group
-
-To use an action group in your IoT Central application, first create a rule. When you add an action to the rule, select **Azure Monitor Action Groups**:
-
-![Choose action](media/howto-use-action-groups/chooseaction.png)
-
-Choose an action group from your Azure subscription:
-
-![Choose action group](media/howto-use-action-groups/chooseactiongroup.png)
-
-Select **Save**. The action group now appears in the list of actions to run when the rule is triggered:
-
-![Saved action group](media/howto-use-action-groups/savedactiongroup.png)
-
-The following table summarizes the information sent to the supported action types:
-
-| Action type | Output format |
-| -- | -- |
-| Email | Standard IoT Central email template |
-| SMS | Azure IoT Central alert: ${applicationName} - "${ruleName}" triggered on "${deviceName}" at ${triggerDate} ${triggerTime} |
-| Voice | Azure I.O.T Central alert: rule "${ruleName}" triggered on device "${deviceName}" at ${triggerDate} ${triggerTime}, in application ${applicationName} |
-| Webhook | { "schemaId" : "AzureIoTCentralRuleWebhook", "data": {[regular webhook payload](howto-create-webhooks.md#payload)}} |
-
-The following text is an example SMS message from an action group:
-
-`iotcentral: Azure IoT Central alert: Contoso - "Low pressure alert" triggered on "Motion sensor 2" at March 20, 2019 10:12 UTC`
-
-## Next steps
-
-Now that you've learned how to use action groups with rules, the suggested next step is to learn how to [manage your devices](howto-manage-devices.md).
iot-central Howto Use App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-app-templates.md
To update your application template, change the template name or description on
## Next steps
-Now that you've learned how to use application templates, the suggested next step is to learn how to [Monitor the overall health of the devices connected to an IoT Central application](howto-monitor-application-health.md)
+Now that you've learned how to use application templates, the suggested next step is to learn how to [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
An administrator can use IoT Central metrics to assess the health of connected d
To view the metrics, an administrator can use charts in the Azure portal, a REST API, or PowerShell or Azure CLI queries.
-To learn more, see [Monitor the overall health of an IoT Central application](howto-monitor-application-health.md).
+To learn more, see [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
## Tools Many of the tools you use as an administrator are available in the **Administration** section of each IoT Central application. You can also use the following tools to complete some administrative tasks: -- [Azure CLI](howto-manage-iot-central-from-cli.md)-- [Azure PowerShell](howto-manage-iot-central-from-powershell.md)
+- [Azure CLI](howto-manage-iot-central-from-cli.md)
- [Azure portal](howto-manage-iot-central-from-portal.md) ## Next steps
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-solution-builder.md
When a device connects to an IoT Central, the device is associated with a device
A solution developer can add rules to an IoT Central application that run customizable actions. Rules evaluate conditions, based on data coming from a device, to determine when to run an action. To learn more about rules, see: - [Tutorial: Create a rule and set up notifications in your Azure IoT Central application](tutorial-create-telemetry-rules.md)-- [Create webhook actions on rules in Azure IoT Central](howto-create-webhooks.md)-- [Group multiple actions to run from one or more rules](howto-use-action-groups.md)
+- [Configure rules](howto-configure-rules.md)
IoT Central has built-in analytics capabilities that an operator can use to analyze the data flowing from the connected devices. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
You can create IoT Central application using a 7-day free trial, or use a standa
- Applications you create using the *free* plan are free for seven days and support up to five devices. You can convert them to use a standard pricing plan at any time before they expire. - Applications you create using the *standard* plan are billed on a per device basis, you can choose either **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing).
-## Quotas
-
-Each Azure subscription has default quotas that could impact the scope of your IoT solution. Currently, IoT Central limits the number of applications you can deploy in a subscription to 10. If you need to increase this limit, contact [Microsoft support](https://azure.microsoft.com/support/options/).
-
-## Known issues
--- Continuous data export doesn't support the Avro format (incompatibility).-- GeoJSON isn't currently supported.-- Map tile isn't currently supported.-- Array schema types aren't supported.-- Only the C device SDK and the Node.js device and service SDKs are supported.-- IoT Central is currently available in the United States, Europe, Asia Pacific, Australia, United Kingdom, and Japan locations.- ## Next steps Now that you have an overview of IoT Central, here are some suggested next steps:
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-create-telemetry-rules.md
In this tutorial, you learned how to:
Now that you've defined a threshold-based rule the suggested next step is to learn how to: > [!div class="nextstepaction"]
-> [Create webhooks on rules](./howto-create-webhooks.md).
+> [Configure rules](howto-configure-rules.md)
load-balancer Load Balancer Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-insights.md
Title: Insights for Azure Load Balancer
description: Use the load balancer insights to achieve rapid fault localization and informed design decisions documentationcenter: na-+ ms.devlang: na na Last updated 10/27/2020-+ # Using Insights to monitor and configure your Azure Load Balancer
load-balancer Troubleshoot Rhc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/troubleshoot-rhc.md
Title: Troubleshoot Azure Load Balancer resource health, frontend, and backend a
description: Use the available metrics to diagnose your degraded or unavailable Azure Standard Load Balancer. documentationcenter: na-+ ms.devlang: na na Last updated 08/14/2020-+ # Troubleshoot resource health, and inbound availability issues
logic-apps Call From Power Automate Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/call-from-power-automate-power-apps.md
If you want to migrate your flow from Power Automate or Power to Logic Apps inst
> [Power Automate connectors](/connectors/connector-reference/connector-reference-powerautomate-connectors). > > * To find which Logic Apps connectors don't have Power Automate equivalents, see
-> [Logic Apps connectors](/connectors/connector-reference/connector-reference-powerautomate-connectors).
+> [Logic Apps connectors](/connectors/connector-reference/connector-reference-logicapps-connectors).
## Prerequisites
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
In this article, learn about Azure Machine Learning releases. For the full SDK
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2021-07-06
+
+### Azure Machine Learning SDK for Python v1.32.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Expose diagnose workspace health in SDK/CLI
+ + **azureml-defaults**
+ + Added `opencensus-ext-azure==1.0.8` dependency to azureml-defaults
+ + **azureml-pipeline-core**
+ + Updated the AutoMLStep to use prebuilt images when the environment for job submission matches the default environment
+ + **azureml-responsibleai**
+ + New error analysis client added to upload, download and list error analysis reports
+ + Ensure `raiwidgets` and `responsibleai` packages are version synchronized
+ + **azureml-train-automl-runtime**
+ + Set the time allocated to dynamically search across various featurization strategies to a maximum of one-fourth of the overall experiment timeout
++ ## 2021-06-21 ### Azure Machine Learning SDK for Python v1.31.0
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Updated experiment URO to use experiment ID. + Bug fixes for attaching remote compute with AzureML CLI. + Updated portal URIs to include tenant for authentication.
- + Updated experiment URI to use experiment Id.
+ + Updated experiment URI to use experiment ID.
+ **azureml-interpret** + azureml-interpret updated to use interpret-community 0.17.0 + **azureml-opendatasets**
Learn more about [image instance segmentation labeling](how-to-label-data.md).
 + **azure-cli-ml** + Grid Profiling removed from the SDK and is no longer supported. + **azureml-accel-models**
- + azureml-accel-models package now supports Tensorflow 2.x
+ + azureml-accel-models package now supports TensorFlow 2.x
+ **azureml-automl-core** + Added error handling in get_output for cases when local versions of pandas/sklearn don't match the ones used during training + **azureml-automl-runtime**
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ **azureml-train-core** + Users must now provide a valid hyperparameter_sampling arg when creating a HyperDriveConfig. In addition, the documentation for HyperDriveRunConfig has been edited to inform users of the deprecation of HyperDriveRunConfig. + Reverting PyTorch Default Version to 1.4.
- + Adding PyTorch 1.6 & Tensorflow 2.2 images and curated environment.
+ + Adding PyTorch 1.6 & TensorFlow 2.2 images and curated environment.
### Azure Machine Learning Studio Notebooks Experience (August Update) + **New features**
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Enable guardrails for forecasting missing value imputations. + Improved logging in AutoML + Added fine grained error handling for data prep exceptions
- + Removing restrictions on phrophet and xgboost models when trained on remote compute.
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. we now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`. + The error handling for custom featurization in forecasting tasks was improved. + The forecasting data set frequency detection mechanism was improved.
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Support for cv_split_column_names to be used with training_data + Deprecated azureml.dprep.Dataflow as a valid type for input data. + Updated Mac to rely on cudatoolkit=9.0 as it is not available at version 10 yet.
- + Removing restrictions on phrophet and xgboost models when trained on remote compute.
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. we now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`. + Added functionality to allow users to include lagged features to generate forecasts. + **azureml-train-automl-runtime** + Improved logging in AutoML + Added fine grained error handling for data prep exceptions
- + Removing restrictions on phrophet and xgboost models when trained on remote compute.
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. we now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`. + Updates to error message to correctly display user error. + Support for cv_split_column_names to be used with training_data
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Speed up Prophet/AutoArima model in AutoML forecasting by enabling parallel fitting for the time series when data sets have multiple time series. In order to benefit from this new feature, you are recommended to set "max_cores_per_iteration = -1" (that is, using all the available cpu cores) in AutoMLConfig. + Fix KeyError on printing guardrails in console interface + Fixed error message for experimentation_timeout_hours
- + Deprecated Tensorflow models for AutoML.
+ + Deprecated TensorFlow models for AutoML.
+ **azureml-automl-runtime** + Fixed error message for experimentation_timeout_hours + Fixed unclassified exception when trying to deserialize from cache store
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ **azureml-pipeline-core** + Allowing the option to regenerate_outputs when using a module that is embedded in a ModuleStep. + **azureml-train-automl-client**
- + Deprecated Tensorflow models for AutoML.
+ + Deprecated TensorFlow models for AutoML.
+ Fix users allow listing unsupported algorithms in local mode + Doc fixes to AutoMLConfig. + Enforcing datatype checks on cv_split_indices input in AutoMLConfig.
Access the following web-based authoring tools from the studio:
+ Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`. + Added documentation example for dataset as PythonScriptStep input + **azureml-tensorboard**
- + updated azureml-tensorboard to support tensorflow 2.0
- + Show correct port number when using a custom Tensorboard port on a Compute Instance
+ + Updated azureml-tensorboard to support TensorFlow 2.0
+ + Show correct port number when using a custom TensorBoard port on a Compute Instance
+ **azureml-train-automl-client** + Fixed an issue where certain packages may be installed at incorrect versions on remote runs. + fixed FeaturizationConfig overriding issue that filters custom featurization config.
Access the following web-based authoring tools from the studio:
+ **azureml-contrib-pipeline-steps** + Optional parameter side_inputs added to ParallelRunStep. This parameter can be used to mount folder on the container. Currently supported types are DataReference and PipelineData. + **azureml-tensorboard**
- + updated azureml-tensorboard to support tensorflow 2.0
+ + Updated azureml-tensorboard to support TensorFlow 2.0
+ **azureml-train-automl-client**
- + fixed FeaturizationConfig overriding issue that filters custom featurization config.
+ + Fixed FeaturizationConfig overriding issue that filters custom featurization config.
+ **azureml-train-automl-runtime** + Moved the `AutoMLStep` in the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`. + **azureml-train-core**
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ Added dataset CLI. For more information: `az ml dataset --help` + Added support for deploying and packaging supported models (ONNX, scikit-learn, and TensorFlow) without an InferenceConfig instance. + Added overwrite flag for service deployment (ACI and AKS) in SDK and CLI. If provided, will overwrite the existing service if service with name already exists. If service doesn't exist, will create new service.
- + Models can be registered with two new frameworks, Onnx and Tensorflow. - Model registration accepts sample input data, sample output data and resource configuration for the model.
+ + Models can be registered with two new frameworks, Onnx and TensorFlow. - Model registration accepts sample input data, sample output data and resource configuration for the model.
+ **azureml-automl-core** + Training an iteration would run in a child process only when runtime constraints are being set. + Added a guardrail for forecasting tasks, to check whether a specified max_horizon will cause a memory issue on the given machine or not. If it will, a guardrail message will be displayed.
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ Updated the minimum required data size for Cross-validation to guarantee a minimum of two samples in each validation fold. + **azureml-cli-common** + CLI now supports model packaging.
- + Models can be registered with two new frameworks, Onnx and Tensorflow.
+ + Models can be registered with two new frameworks, Onnx and TensorFlow.
+ Model registration accepts sample input data, sample output data and resource configuration for the model. + **azureml-contrib-gbdt** + fixed the release channel for the notebook
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ Allow intermediate data in Azure Machine Learning Pipeline to be converted to tabular dataset and used in [`AutoMLStep`](/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automlstep). + Added support for deploying and packaging supported models (ONNX, scikit-learn, and TensorFlow) without an InferenceConfig instance. + Added overwrite flag for service deployment (ACI and AKS) in SDK and CLI. If provided, will overwrite the existing service if service with name already exists. If service doesn't exist, will create new service.
- + Models can be registered with two new frameworks, Onnx and Tensorflow. Model registration accepts sample input data, sample output data and resource configuration for the model.
+ + Models can be registered with two new frameworks, Onnx and TensorFlow. Model registration accepts sample input data, sample output data and resource configuration for the model.
+ Added new datastore for Azure Database for MySQL. Added example for using Azure Database for MySQL in DataTransferStep in Azure Machine Learning Pipelines. + Added functionality to add and remove tags from experiments Added functionality to remove tags from runs + Added overwrite flag for service deployment (ACI and AKS) in SDK and CLI. If provided, will overwrite the existing service if service with name already exists. If service doesn't exist, will create new service.
At the time, of this release, the following browsers are supported: Chrome, Fire
+ Improve NoaaIsdWeather enrich performance in non-SPARK version significantly. + **azureml-pipeline-steps** + DBFS Datastore is now supported for Inputs and Outputs in DatabricksStep.
- + Updated documentation for Azure Batch Step with regards to inputs/outputs.
+ + Updated documentation for Azure Batch Step with regard to inputs/outputs.
+ In AzureBatchStep, changed *delete_batch_job_after_finish* default value to *true*. + **azureml-telemetry** + Move azureml-contrib-opendatasets to azureml-opendatasets.
At the time, of this release, the following browsers are supported: Chrome, Fire
+ Parameter hash_paths for all pipeline steps is deprecated and will be removed in future. By default contents of the source_directory is hashed (except files listed in `.amlignore` or `.gitignore`) + Continued improving Module and ModuleStep to support compute type-specific modules, to prepare for RunConfiguration integration and other changes to unlock compute type-specific module usage in pipelines. + **azureml-pipeline-steps**
- + AzureBatchStep: Improved documentation with regards to inputs/outputs.
+ + AzureBatchStep: Improved documentation with regard to inputs/outputs.
+ AzureBatchStep: Changed delete_batch_job_after_finish default value to true. + **azureml-train-core** + Strings are now accepted as compute target for Automated Hyperparameter Tuning.
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
Previously updated : 11/03/2020 Last updated : 07/06/2021
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
Previously updated : 09/22/2020 Last updated : 07/06/2021 # Customer intent: As low code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models.
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
Previously updated : 07/31/2020 Last updated : 07/06/2021 # Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
To create and work with datasets, you need:
**OR**
- * Work on your own Jupyter notebook and install the SDK yourself with [these instructions](/python/api/overview/azure/ml/install).
+ * Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install).
> [!NOTE] > Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux (7, 8), Ubuntu (14.04, 16.04, 18.04), Fedora (27, 28), Debian (8, 9), and CentOS (7). If you are using unsupported distros, please follow [this guide](/dotnet/core/install/linux) to install .NET Core 2.1 to proceed.
managed-instance-apache-cassandra Dual Write Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/dual-write-proxy-migration.md
Title: Live migrate to Azure Managed Instance for Apache Cassandra using Apache Spark and a dual-write proxy.
-description: Learn how to migrate to Azure Managed Instance for Apache Cassandra using Apache Spark and a dual-write proxy.
+ Title: Live migration to Azure Managed Instance for Apache Cassandra using Apache Spark and a dual-write proxy
+description: Learn how to migrate to Azure Managed Instance for Apache Cassandra by using Apache Spark and a dual-write proxy.
Last updated 06/02/2021
-# Live migration to Azure Managed Instance for Apache Cassandra using dual-write proxy
+# Live migration to Azure Managed Instance for Apache Cassandra by using a dual-write proxy
> [!IMPORTANT] > Azure Managed Instance for Apache Cassandra is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Where possible, we recommend using Apache Cassandra native capability to migrate data from your existing cluster into Azure Managed Instance for Apache Cassandra by configuring a [hybrid cluster](configure-hybrid-cluster.md). This will use Apache Cassandra's gossip protocol to replicate data from your source data-center into your new managed instance datacenter in a seamless way. However, there may be some scenarios where your source database version is not compatible, or a hybrid cluster setup is otherwise not feasible. This article describes how to migrate data to Azure Managed Instance for Apache Cassandra in a live fashion using a [dual-write proxy](https://github.com/Azure-Samples/cassandra-proxy) and Apache Spark. The benefits of this approach are:
+Where possible, we recommend using the Apache Cassandra native capability to migrate data from your existing cluster into Azure Managed Instance for Apache Cassandra by configuring a [hybrid cluster](configure-hybrid-cluster.md). This capability uses Apache Cassandra's gossip protocol to replicate data from your source datacenter into your new managed-instance datacenter in a seamless way. However, there might be some scenarios where your source database version is not compatible, or a hybrid cluster setup is otherwise not feasible.
-- **minimal application changes** - the proxy can accept connections from your application code with little or no configuration changes, and will route all requests to your source database, and asynchronously route writes to a secondary target. -- **client wire protocol dependent** - since this approach is not dependent on backend resources or internal protocols, it can be used with any source or target Cassandra system that implements the Apache Cassandra wire protocol.
+This article describes how to migrate data to Azure Managed Instance for Apache Cassandra in a live fashion by using a [dual-write proxy](https://github.com/Azure-Samples/cassandra-proxy) and Apache Spark. The benefits of this approach are:
-The image below illustrates the approach.
+- **Minimal application changes**. The proxy can accept connections from your application code with few or no configuration changes. It will route all requests to your source database and asynchronously route writes to a secondary target.
+- **Client wire protocol dependency**. Because this approach is not dependent on back-end resources or internal protocols, it can be used with any source or target Cassandra system that implements the Apache Cassandra wire protocol.
+The following image illustrates the approach.
+ ## Prerequisites
-* Provision an Azure Managed Instance for Apache Cassandra cluster using [Azure portal](create-cluster-portal.md) or [Azure CLI](create-cluster-cli.md) and ensure you can [connect to your cluster with CQLSH](./create-cluster-portal.md#connecting-to-your-cluster).
+* Provision an Azure Managed Instance for Apache Cassandra cluster by using the [Azure portal](create-cluster-portal.md) or the [Azure CLI](create-cluster-cli.md). Ensure that you can [connect to your cluster with CQLSH](./create-cluster-portal.md#connecting-to-your-cluster).
-* [Provision an Azure Databricks account inside your Managed Cassandra VNet](deploy-cluster-databricks.md). Ensure it also has network access to your source Cassandra cluster. We will create a Spark cluster in this account for the historic data load.
+* [Provision an Azure Databricks account inside your Managed Cassandra virtual network](deploy-cluster-databricks.md). Ensure that the account has network access to your source Cassandra cluster. We'll create a Spark cluster in this account for the historical data load.
-* Ensure you've already migrated the keyspace/table scheme from your source Cassandra database to your target Cassandra Managed Instance database.
+* Ensure that you've already migrated the keyspace/table scheme from your source Cassandra database to your target Cassandra managed-instance database.
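If you still need to copy the schema, the following is a minimal sketch using `cqlsh`; the hosts, credentials, and keyspace name are placeholders, and the generated CQL may need review (for example, replication settings) before you apply it to the target:

```bash
# Dump the schema for one keyspace from the source cluster (placeholder values).
cqlsh <source-host> 9042 -u <username> -p <password> \
  -e "DESCRIBE KEYSPACE mykeyspace" > mykeyspace-schema.cql

# After reviewing the file, apply it to the target managed instance.
cqlsh <target-host> 9042 -u <username> -p <password> --ssl -f mykeyspace-schema.cql
```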
## Provision a Spark cluster We recommend selecting Azure Databricks runtime version 7.5, which supports Spark 3.0. ## Add Spark dependencies You need to add the Apache Spark Cassandra Connector library to your cluster to connect to both native and Azure Cosmos DB Cassandra endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates. Select **Install**, and then restart the cluster when installation is complete. > [!NOTE]
-> Make sure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
+> Be sure to restart the Azure Databricks cluster after the Cassandra Connector library is installed.
-## Install Dual-write proxy
+## Install the dual-write proxy
For optimal performance during dual writes, we recommend installing the proxy on all nodes in your source Cassandra cluster.
cd cassandra-proxy
mvn package ```
-## Start Dual-write proxy
+## Start the dual-write proxy
-It is recommended that you install the proxy on all nodes in your source Cassandra cluster. At minimum, you need to run the following command in order to start the proxy on each node. Replace `<target-server>` with an IP or server address from one of the nodes in the target cluster. Replace `<path to JKS file>` with path to a local jks file, and `<keystore password>` with the corresponding password:
+We recommend that you install the proxy on all nodes in your source Cassandra cluster. At minimum, run the following command to start the proxy on each node. Replace `<target-server>` with an IP or server address from one of the nodes in the target cluster. Replace `<path to JKS file>` with path to a local .jks file, and replace `<keystore password>` with the corresponding password.
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> ```
-Starting the proxy in this way assumes the following are true:
+Starting the proxy in this way assumes that the following are true:
-- source and target endpoints have the same username and password-- source and target endpoints implement SSL
+- Source and target endpoints have the same username and password.
+- Source and target endpoints implement Secure Sockets Layer (SSL).
-If your source and target endpoints cannot meet these criteria, read below for further configuration options.
+If your source and target endpoints can't meet these criteria, read on for further configuration options.
### Configure SSL
-For SSL, you can either implement an existing keystore (for example the one used by your source cluster), or you can create self-signed certificate using keytool:
+For SSL, you can either implement an existing keystore (for example, the one that your source cluster uses) or create a self-signed certificate by using `keytool`:
```bash keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048 ```
-You can also disable SSL for source or target endpoints if they do not implement SSL. Use the `--disable-source-tls` or `--disable-target-tls` flags:
+You can also disable SSL for source or target endpoints if they don't implement SSL. Use the `--disable-source-tls` or `--disable-target-tls` flags:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> --disable-source-tls true --disable-target-tls true ``` > [!NOTE]
-> Make sure your client application uses the same keystore and password as the one used for the dual-write proxy when building SSL connections to the database via the proxy.
+> Make sure your client application uses the same keystore and password as the ones used for the dual-write proxy when you're building SSL connections to the database via the proxy.
-### Configure credentials and port
+### Configure the credentials and port
-By default, the source credentials will be passed through from your client app, and used by the proxy for making connections to the source and target clusters. As mentioned above, this assumes that source and target credentials are the same. If necessary, you can specify a different username and password for the target Cassandra endpoint separately when starting the proxy:
+By default, the source credentials will be passed through from your client app. The proxy will use the credentials for making connections to the source and target clusters. As mentioned earlier, this process assumes that the source and target credentials are the same. If necessary, you can specify a different username and password for the target Cassandra endpoint separately when starting the proxy:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> ```
-The default source and target ports, when not specified, will be `9042`. If either the target or source Cassandra endpoints run on a different port, you can use `--source-port` or `--target-port` to specify a different port number.
+The default source and target ports, when not specified, will be 9042. If either the target or the source Cassandra endpoint runs on a different port, you can use `--source-port` or `--target-port` to specify a different port number:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> ```
-### Deploy proxy remotely
+### Deploy the proxy remotely
-There may be circumstances in which you do not want to install the proxy on the cluster nodes themselves, and prefer to install it on a separate machine. In that scenario, you would need need to specify the IP of the `<source-server>`:
+There might be circumstances in which you don't want to install the proxy on the cluster nodes themselves, and you prefer to install it on a separate machine. In that scenario, you need to specify the IP address of `<source-server>`:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar <source-server> <destination-server> ``` > [!NOTE]
-> If you do not install and run the proxy on all nodes in a native Apache Cassandra cluster, this will impact performance in your application as the client driver will no be able to open connections to all nodes within the cluster.
+> If you don't install and run the proxy on all nodes in a native Apache Cassandra cluster, this will affect performance in your application. The client driver won't be able to open connections to all nodes within the cluster.
### Allow zero application code changes
-By default, the proxy listens on port `29042`. This requires the application code to be changed to point to this port. However, you can also change the port the proxy listens on. You may wish to do this if you want to eliminate application level code changes by having the source Cassandra server run on a different port, and have the proxy run on the standard Cassandra port `9042`:
+By default, the proxy listens on port 29042. The application code must be changed to point to this port. However, you can change the port that the proxy listens on. You might do this if you want to eliminate application-level code changes by:
+
+- Having the source Cassandra server run on a different port.
+- Having the proxy run on the standard Cassandra port 9042.
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --proxy-port 9042 ``` > [!NOTE]
-> Installing the proxy on cluster nodes does not require restart of the nodes. However, if you have many application clients and prefer to have the proxy running on the standard Cassandra port `9042` in order to eliminate any application level code changes, you would need to change the [Apache Cassandra default port](https://cassandra.apache.org/doc/latest/faq/#what-ports-does-cassandra-use). You would then need to restart the nodes in your cluster, and configure the source port to be the new port you have defined for your source Cassandra cluster. In the below example, we change the source Cassandra cluster to run on port 3074, and start the cluster on port 9042.
+> Installing the proxy on cluster nodes does not require restart of the nodes. However, if you have many application clients and prefer to have the proxy running on the standard Cassandra port 9042 in order to eliminate any application-level code changes, you need to change the [Apache Cassandra default port](https://cassandra.apache.org/doc/latest/faq/#what-ports-does-cassandra-use). You then need to restart the nodes in your cluster, and configure the source port to be the new port that you defined for your source Cassandra cluster.
+>
+> In the following example, we change the source Cassandra cluster to run on port 3074, and we start the cluster on port 9042:
+>
>```bash >java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --proxy-port 9042 --source-port 3074 >``` ### Force protocols
-The proxy has functionality to force protocols which may be necessary if the source endpoint is more advanced then the target, or otherwise unsupported. In that case you can specify `--protocol-version` and `--cql-version` to force protocol to comply with the target:
+The proxy has functionality to force protocols, which might be necessary if the source endpoint is more advanced than the target or is otherwise unsupported. In that case, you can specify `--protocol-version` and `--cql-version` to force the protocol to comply with the target:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --protocol-version 4 --cql-version 3.11 ```
-Once you have the dual-write proxy up and running, then you will need to change port on your application client and restart (or change Cassandra port and restart cluster if you have chosen this approach). The proxy will then start forwarding writes to the target endpoint. You can learn about [monitoring and metrics](https://github.com/Azure-Samples/cassandra-proxy#monitoring) available in the proxy tool.
+After the dual-write proxy is running, you'll need to change the port on your application client and restart. (Or change the Cassandra port and restart the cluster if you've chosen that approach.) The proxy will then start forwarding writes to the target endpoint. You can learn about [monitoring and metrics](https://github.com/Azure-Samples/cassandra-proxy#monitoring) available in the proxy tool.
-## Run the historic data load.
+## Run the historical data load
-To load the data, create a Scala notebook in your Databricks account. Replace your source and target Cassandra configurations with the corresponding credentials, and source and target keyspaces and tables. Add more variables for each table as required to the below sample, then run. After your application has started sending requests to the dual-write proxy, you are ready to migrate historic data.
+To load the data, create a Scala notebook in your Azure Databricks account. Replace your source and target Cassandra configurations with the corresponding credentials, and replace the source and target keyspaces and tables. Add more variables for each table as required to the following sample, and then run. After your application starts sending requests to the dual-write proxy, you're ready to migrate historical data.
```scala import com.datastax.spark.connector._
DFfromSourceCassandra
``` > [!NOTE]
-> In the above Scala sample, you will notice that `timestamp` is being set to the current time prior to reading all the data in the source table, and then `writetime` is being set to this backdated timestamp. This is to ensure that records that are written from the historic data load to the target endpoint cannot overwrite updates that come in with a later timestamp from the dual-write proxy while historic data is being read. If for any reason you need to preserve *exact* timestamps, you should take a historic data migration approach which preserves timestamps, such as [this](https://github.com/scylladb/scylla-migrator) sample.
+> In the preceding Scala sample, you'll notice that `timestamp` is being set to the current time before reading all the data in the source table. Then, `writetime` is being set to this backdated time stamp. This ensures that records that are written from the historical data load to the target endpoint can't overwrite updates that come in with a later time stamp from the dual-write proxy while historical data is being read.
+>
+> If you need to preserve *exact* time stamps for any reason, you should take a historical data migration approach that preserves time stamps, such as [this sample](https://github.com/scylladb/scylla-migrator).
-## Validation
+## Validate the source and target
-Once the historic data load is complete, your databases should be in sync and ready for cutover. However, it is recommended that you carry out a validation of source and target to ensure that request results match before finally cutting over.
+After the historical data load is complete, your databases should be in sync and ready for cutover. However, we recommend that you validate the source and target to ensure that request results match before finally cutting over.
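One lightweight spot check is to compare row counts for a table on both endpoints. This is only a sketch with placeholder hosts, credentials, and table names; `COUNT(*)` can time out on large tables, so treat it as a smoke test rather than full validation:

```bash
# Count rows on the source (placeholder host, credentials, and table).
cqlsh <source-host> 9042 -u <username> -p <password> \
  -e "SELECT COUNT(*) FROM mykeyspace.mytable;"

# Count rows on the target managed instance.
cqlsh <target-host> 9042 -u <username> -p <password> --ssl \
  -e "SELECT COUNT(*) FROM mykeyspace.mytable;"
```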
## Next steps
marketplace Azure Vm Create Certification Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
Provisioning issues can include the following failure scenarios:
|Scenario|Error|Reason|Solution|
|||||
|1|Invalid virtual hard disk (VHD)|If the specified cookie value in the VHD footer is incorrect, the VHD will be considered invalid.|Re-create the image and submit the request.|
-|2|Invalid blob type|VM provisioning failed because the used block is a blob type instead of a page type.|Re-create the image and submit the request.|
+|2|Invalid blob type|VM provisioning failed because the VHD was uploaded as a block blob instead of a page blob.|Re-create the image as a page blob and submit the request.|
|3|Provisioning timeout or not properly generalized|There's an issue with VM generalization.|Re-create the image with generalization and submit the request.| |
marketplace Ratings Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/ratings-reviews.md
description: Learn how to access a consolidated view of customer feedback for yo
Previously updated : 06/03/2021 Last updated : 07/06/2021
Reviews are displayed in chronological order for when they were posted. The defa
### Responding to a review
-You can respond to reviews from users and the response will be visible on either Azure Marketplace or AppSource storefronts. To respond to a review, follow these steps:
+You can respond to reviews from users, and the response will be visible on either the Azure Marketplace or AppSource storefront. This functionality applies to the following offer types: Azure Application, Azure Container, Azure virtual machine, Dynamics 365 Business Central, Dynamics 365 Customer Engagement & Power Apps, Dynamics 365 Operations, IoT Edge Module, Managed service, Power BI app, and Software as a Service. To respond to a review, follow these steps:
1. Select the **Ratings & reviews** tab, and then select **Azure Marketplace** or **AppSource**. You can select **filters** to narrow down the list of reviews, and display, for example, only reviews with a specific star rating
media-services Encode Basic Encoding Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-basic-encoding-python-quickstart.md
Previously updated : 2/26/2021 Last updated : 7/2/2021
Create a fork and clone the sample located in the [Python samples repository](ht
Get the values from your account to create an *.env* file. That's right, save it with no name, just the extension. Use *sample.env* as a template then save the *.env* file to the BasicEncoder folder in your local clone.
+## Use Python virtual environments
+For samples, we recommend that you always create and activate a Python virtual environment using the following steps:
+
+1. Open the sample folder in VS Code or another editor
+2. Create the virtual environment:
+
+ ``` bash
+ # py -3 uses the global python interpreter. You can also use python -m venv .venv.
+ py -3 -m venv .venv
+ ```
+
+ This command runs the Python venv module and creates a virtual environment in a folder named .venv.
+
+3. Activate the virtual environment:
+
+ ``` bash
+ .venv\scripts\activate
+ ```
+
+ A virtual environment is a folder within a project that isolates a copy of a specific Python interpreter. Once you activate that environment (which Visual Studio Code does automatically), running pip install installs a library into that environment only. When you then run your Python code, it runs in the environment's exact context with specific versions of every library. And when you run pip freeze, you get the exact list of those libraries. (In many of the samples, you create a requirements.txt file for the libraries you need, then use pip install -r requirements.txt. A requirements file is generally needed when you deploy code to Azure.)
+
+## Set up
+
+Set up and [configure your local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment)
+
+Install the azure-identity library for Python. This module is needed for Azure Active Directory authentication. See the details at [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#environment-variables)
+
+ ``` bash
+ pip install azure-identity
+ ```
+
+Install the Python SDK for [Azure Media Services](/python/api/overview/azure/media-services)
+
+The PyPI page for the Media Services Python SDK, including the latest version details, is [azure-mgmt-media](https://pypi.org/project/azure-mgmt-media/).
+
+ ``` bash
+ pip install azure-mgmt-media
+ ```
+
+Install the [Azure Storage SDK for Python](https://pypi.org/project/azure-storage-blob/)
+
+ ``` bash
+ pip install azure-storage-blob
+ ```
+
+You can optionally install all of the requirements for a given sample by using the "requirements.txt" file in the sample folder.
+
+ ``` bash
+ pip install -r requirements.txt
+ ```
+ ## Try the code The code below is thoroughly commented. Use the whole script or use parts of it for your own script.
In this sample, a random number is generated for naming things so you can identi
We're not using the SAS URL for the input asset in this sample.
-```python
-import adal
-from msrestazure.azure_active_directory import AdalAuthentication
-from msrestazure.azure_cloud import AZURE_PUBLIC_CLOUD
-from azure.mgmt.media import AzureMediaServices
-from azure.mgmt.media.models import (
- Asset,
- Transform,
- TransformOutput,
- BuiltInStandardEncoderPreset,
- Job,
- JobInputAsset,
- JobOutputAsset)
-import os, uuid, sys
-from azure.identity import DefaultAzureCredential
-from azure.storage.blob import BlobServiceClient, BlobClient
-
-#Timer for checking job progress
-import time
-
-#This is only necessary for the random number generation
-import random
-
-# Set and get environment variables
-# Open sample.env, edit the values there and save the file as .env
-# (Not all of the values may be used in this sample code, but the .env file is reusable.)
-# Use config to use the .env file.
-print("Getting .env values")
-client_id = os.getenv('AADCLIENTID','default_val')
-key = os.getenv('AADSECRET','default_val')
-tenant_id = os.getenv('AADTENANTID','default_val')
-tenant_domain = os.getenv('AADTENANTDOMAIN','default_val')
-account_name = os.getenv('ACCOUNTNAME','default_val')
-location = os.getenv('LOCATION','default_val')
-resource_group_name = os.getenv('RESOURCEGROUP','default_val')
-subscription_id = os.getenv('SUBSCRIPTIONID','default_val')
-arm_audience = os.getenv('ARMAADAUDIENCE','default_val')
-arm_endpoint = os.getenv('ARMENDPOINT','default_val')
-
-#### STORAGE ####
-# Values from .env and the blob url
-# For this sample you will use the storage account key to create and access assets
-# The SAS URL is not used here
-storage_account_name = os.getenv('STORAGEACCOUNTNAME','default_val')
-storage_account_key = os.getenv('STORAGEACCOUNTKEY','default_val')
-storage_blob_url = 'https://' + storage_account_name + '.blob.core.windows.net/'
-
-# Active Directory
-LOGIN_ENDPOINT = AZURE_PUBLIC_CLOUD.endpoints.active_directory
-RESOURCE = AZURE_PUBLIC_CLOUD.endpoints.active_directory_resource_id
-
-# Establish credentials
-context = adal.AuthenticationContext(LOGIN_ENDPOINT + '/' + tenant_id)
-credentials = AdalAuthentication(
- context.acquire_token_with_client_credentials,
- RESOURCE,
- client_id,
- key
-)
-
-# The file you want to upload. For this example, put the file in the same folder as this script.
-# The file ignite.mp4 has been provided for you.
-source_file = "ignite.mp4"
-
-# Generate a random number that will be added to the naming of things so that you don't have to keep doing this during testing.
-thisRandom = random.randint(0,9999)
-
-# Set the attributes of the input Asset using the random number
-in_asset_name = 'inputassetName' + str(thisRandom)
-in_alternate_id = 'inputALTid' + str(thisRandom)
-in_description = 'inputdescription' + str(thisRandom)
-# Create an Asset object
-# From the SDK
-# Asset(*, alternate_id: str = None, description: str = None, container: str = None, storage_account_name: str = None, **kwargs) -> None
-# The asset_id will be used for the container parameter for the storage SDK after the asset is created by the AMS client.
-input_asset = Asset(alternate_id=in_alternate_id,description=in_description)
-
-# Set the attributes of the output Asset using the random number
-out_asset_name = 'outputassetName' + str(thisRandom)
-out_alternate_id = 'outputALTid' + str(thisRandom)
-out_description = 'outputdescription' + str(thisRandom)
-# From the SDK
-# Asset(*, alternate_id: str = None, description: str = None, container: str = None, storage_account_name: str = None, **kwargs) -> None
-output_asset = Asset(alternate_id=out_alternate_id,description=out_description)
-
-# The AMS Client
-print("Creating AMS client")
-# From SDK
-# AzureMediaServices(credentials, subscription_id, base_url=None)
-client = AzureMediaServices(credentials, subscription_id)
-
-# Create an input Asset
-print("Creating input asset " + in_asset_name)
-# From SDK
-# create_or_update(resource_group_name, account_name, asset_name, parameters, custom_headers=None, raw=False, **operation_config)
-inputAsset = client.assets.create_or_update(resource_group_name, account_name, in_asset_name, input_asset)
-
-# An AMS asset is a container with a specfic id that has "asset-" prepended to the GUID.
-# So, you need to create the asset id to identify it as the container
-# where Storage is to upload the video (as a block blob)
-in_container = 'asset-' + inputAsset.asset_id
-
-# create an output Asset
-print("Creating output asset " + out_asset_name)
-# From SDK
-# create_or_update(resource_group_name, account_name, asset_name, parameters, custom_headers=None, raw=False, **operation_config)
-outputAsset = client.assets.create_or_update(resource_group_name, account_name, out_asset_name, output_asset)
-
-### Use the Storage SDK to upload the video ###
-print("Uploading the file " + source_file)
-# From SDK
-# BlobServiceClient(account_url, credential=None, **kwargs)
-blob_service_client = BlobServiceClient(account_url=storage_blob_url, credential=storage_account_key)
-# From SDK
-# get_blob_client(container, blob, snapshot=None)
-blob_client = blob_service_client.get_blob_client(in_container,source_file)
-# Upload the video to storage as a block blob
-with open(source_file, "rb") as data:
- # From SDK
- # upload_blob(data, blob_type=<BlobType.BlockBlob: 'BlockBlob'>, length=None, metadata=None, **kwargs)
- blob_client.upload_blob(data, blob_type="BlockBlob")
-
-### Create a Transform ###
-transform_name='MyTrans' + str(thisRandom)
-# From SDK
-# TransformOutput(*, preset, on_error=None, relative_priority=None, **kwargs) -> None
-transform_output = TransformOutput(preset=BuiltInStandardEncoderPreset(preset_name="AdaptiveStreaming"))
-print("Creating transform " + transform_name)
-# From SDK
-# Create_or_update(resource_group_name, account_name, transform_name, outputs, description=None, custom_headers=None, raw=False, **operation_config)
-transform = client.transforms.create_or_update(resource_group_name=resource_group_name,account_name=account_name,transform_name=transform_name,outputs=[transform_output])
-
-### Create a Job ###
-job_name = 'MyJob'+ str(thisRandom)
-print("Creating job " + job_name)
-files = (source_file)
-# From SDK
-# JobInputAsset(*, asset_name: str, label: str = None, files=None, **kwargs) -> None
-input = JobInputAsset(asset_name=in_asset_name)
-# From SDK
-# JobOutputAsset(*, asset_name: str, **kwargs) -> None
-outputs = JobOutputAsset(asset_name=out_asset_name)
-# From SDK
-# Job(*, input, outputs, description: str = None, priority=None, correlation_data=None, **kwargs) -> None
-theJob = Job(input=input,outputs=[outputs])
-# From SDK
-# Create(resource_group_name, account_name, transform_name, job_name, parameters, custom_headers=None, raw=False, **operation_config)
-job: Job = client.jobs.create(resource_group_name,account_name,transform_name,job_name,parameters=theJob)
-
-### Check the progress of the job ###
-# From SDK
-# get(resource_group_name, account_name, transform_name, job_name, custom_headers=None, raw=False, **operation_config)
-job_state = client.jobs.get(resource_group_name,account_name,transform_name,job_name)
-# First check
-print("First job check")
-print(job_state.state)
-
-# Check the state of the job every 10 seconds. Adjust time_in_seconds = <how often you want to check for job state>
-def countdown(t):
- while t:
- mins, secs = divmod(t, 60)
- timer = '{:02d}:{:02d}'.format(mins, secs)
- print(timer, end="\r")
- time.sleep(1)
- t -= 1
- job_state = client.jobs.get(resource_group_name,account_name,transform_name,job_name)
- if(job_state.state != "Finished"):
- print(job_state.state)
- countdown(int(time_in_seconds))
- else:
- print(job_state.state)
-time_in_seconds = 10
-countdown(int(time_in_seconds))
-```
+[!code-python[Main](../../../media-services-v3-python/BasicEncoding/basic-encoding.py)]
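If you want to verify your environment before running the sample above, a quick smoke test can confirm that your `.env` values and the `azure-identity` and `azure-mgmt-media` packages work. The sketch below is not part of the sample; it assumes the environment variable names used elsewhere in this article and the `python-dotenv` helper package for loading the `.env` file, and it only lists the transforms in your account.

```python
# Minimal smoke test (not part of the sample): confirm that the .env values and
# the azure-identity / azure-mgmt-media packages work before running the script.
import os
from dotenv import load_dotenv  # pip install python-dotenv (assumed helper)
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices

load_dotenv()  # reads the .env file created earlier

subscription_id = os.getenv("SUBSCRIPTIONID")
resource_group = os.getenv("RESOURCEGROUP")
account_name = os.getenv("ACCOUNTNAME")

# DefaultAzureCredential picks up AZURE_CLIENT_ID / AZURE_TENANT_ID /
# AZURE_CLIENT_SECRET environment variables, a signed-in Azure CLI, and more.
client = AzureMediaServices(DefaultAzureCredential(), subscription_id)

# List the transforms already in the account as a simple read-only check.
for transform in client.transforms.list(resource_group, account_name):
    print(transform.name)
```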
## Delete resources
When you're finished with the quickstart, delete the resources created in the re
## Next steps
-Get familiar with the [Media Services Python SDK](/python/api/azure-mgmt-media/)
+Get familiar with the [Media Services Python SDK](/python/api/azure-mgmt-media/)
+
+## Resources
+
+- See the Azure Media Services [management API](/python/api/overview/azure/mediaservices/management).
+- Learn how to use the [Storage APIs with Python](/azure/developer/python/azure-sdk-example-storage-use?tabs=cmd)
+- Learn more about the [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#environment-variables)
+- Learn more about [Azure Media Services v3](/azure/media-services/latest/media-services-overview).
+- Learn about the [Azure Python SDKs](/azure/developer/python)
+- Learn more about [usage patterns for Azure Python SDKs](/azure/developer/python/azure-sdk-library-usage-patterns)
+- Find more Azure Python SDKs in the [Azure Python SDK index](/azure/developer/python/azure-sdk-library-package-index)
+- [Azure Storage Blob Python SDK reference](/python/api/azure-storage-blob/)
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
editor: ''
+ Last updated 03/17/2021
To stay up-to-date with the most recent developments, this article provides you
## June 2021
+### Private links support is now GA
+
+Support for using Media Services with [private links](/azure/private-link/) is now GA and available in all Azure regions including Azure Government clouds.
+Azure Private Link enables you to access Azure PaaS Services and Azure hosted customer-owned/partner services over a Private Endpoint in your virtual network.
+Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet.
+
+For details on how to use Media Services with private links, see [Create a Media Services and Storage account with a Private Link](./security-private-link-how-to.md)
+ ### New US West 3 region is GA The US West 3 region is now GA and available for customers to use when creating new Media Services accounts.
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
You'll find description and links to the samples you may be looking for in each
| [ContentProtection/OfflinePlayReadyAndWidevine](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/OfflinePlayReadyAndWidevine)|This sample demonstrates how to dynamically encrypt your content with PlayReady and Widevine DRM and play the content without requesting a license from license service. It shows how to create a transform with built-in AdaptiveStreaming preset, submit a job, create a ContentKeyPolicy with open restriction and PlayReady/Widevine persistent configuration, associate the ContentKeyPolicy with a StreamingLocator and print a url for playback.| | [Streaming/AssetFilters](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Streaming/AssetFilters)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, create an asset-filter and an account-filter, associate the filters to streaming locators and print urls for playback.| | [Streaming/StreamHLSAndDASH](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Streaming/StreamHLSAndDASH)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, publish output asset for HLS and DASH streaming.|
-| [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/HighAvailabilityEncodingStreaming/) | This sample provides guidance and best practices for a production system using on-demand encoding or analytics. Readers should start with the companion article [High Availability with Media Services and VOD](media-services-high-availability-encoding.md). There is a separate solution file provided for the [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/HighAvailabilityEncodingStreaming/Readme.md) sample. |
+| [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/HighAvailabilityEncodingStreaming/) | This sample provides guidance and best practices for a production system using on-demand encoding or analytics. Readers should start with the companion article [High Availability with Media Services and VOD](architecture-high-availability-encoding-concept.md). There is a separate solution file provided for the [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/HighAvailabilityEncodingStreaming/README.md) sample. |
## [Node.JS](#tab/node/)
media-services Use Intel Grpc Vas Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/use-intel-grpc-vas-tutorial.md
In the initial release of this inference server, you have access to the followin
- object_tracking for person_vehicle_bike_tracking ![object tracking for person vehicle](./media/use-intel-openvino-tutorial/object-tracking.png)
-It uses Pre-loaded Object Detection, Object Classification and Object Tracking pipelines to get started quickly. In addition it comes with pre-loaded [person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-vehicle-bike-detection-crossroad-0078/description/person-vehicle-bike-detection-crossroad-0078.md) and [vehicle-attributes-recognition-barrier-0039 models](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/vehicle-attributes-recognition-barrier-0039/description/vehicle-attributes-recognition-barrier-0039.md).
+It uses Pre-loaded Object Detection, Object Classification and Object Tracking pipelines to get started quickly. In addition it comes with pre-loaded [person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-vehicle-bike-detection-crossroad-0078/README.md) and [vehicle-attributes-recognition-barrier-0039 models](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/vehicle-attributes-recognition-barrier-0039/README.md).
> [!NOTE] > By downloading and using the Edge module: OpenVINO™ DL Streamer – Edge AI Extension from Intel, and the included software, you agree to the terms and conditions under the [License Agreement](https://www.intel.com/content/www/us/en/legal/terms-of-use.html).
media-services Media Services Specifications Live Timed Metadata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/media-services-specifications-live-timed-metadata.md
The following documents contain provisions, which, through reference in this tex
| [RFC8216] | R. Pantos, Ed.; W. May. HTTP Live Streaming. August 2017. Informational. [https://tools.ietf.org/html/rfc8216](https://tools.ietf.org/html/rfc8216) | | [RFC4648] | The Base16, Base32, and Base64 Data Encodings - [https://tools.ietf.org/html/rfc4648](https://tools.ietf.org/html/rfc4648) | | [RTMP] | ["Adobe's Real-Time Messaging Protocol", December 21, 2012](https://www.adobe.com/devnet/rtmp.html) |
-| [SCTE-35-2019] | SCTE 35: 2019 - Digital Program Insertion Cueing Message for Cable - https://www.scte.org/SCTEDocs/Standards/ANSI_SCTE%2035%202019r1.pdf |
+| [SCTE-35-2019] | SCTE 35: 2019 - Digital Program Insertion Cueing Message for Cable - https://scte-cms-resource-storage.s3.amazonaws.com/ANSI_SCTE-35-2019a-1582645390859.pdf |
| [SCTE-214-1] | SCTE 214-1 2016 – MPEG DASH for IP-Based Cable Services Part 1: MPD Constraints and Extensions | | [SCTE-214-3] | SCTE 214-3 2015 MPEG DASH for IP-Based Cable Services Part 3: DASH/FF Profile | | [SCTE-224] | SCTE 224 2018r1 – Event Scheduling and Notification Interface |
The "legacy" EXT-X-CUE tag is defined as below and also can be normative referen
| | -- | -- | - | | CUE | quoted string | Required | The message encoded as a base64-encoded string as described in [RFC4648]. For [SCTE-35] messages, this is the base64-encoded splice_info_section(). | | TYPE | quoted string | Required | A URN or URL identifying the message scheme. For [SCTE-35] messages, the type takes the special value "scte35". |
-| ID | quoted string | Required | A unique identifier for the event. If the ID is not specified when the message is ingested, Azure Media Services will generate a unique id. |
+| ID | quoted string | Required | A unique identifier for the event. If the ID is not specified when the message is ingested, Azure Media Services will generate a unique ID. |
| DURATION | decimal floating point number | Required | The duration of the event. If unknown, the value **SHOULD** be 0. Units are factional seconds. | | ELAPSED | decimal floating point number | Optional, but Required for sliding window | When the signal is being repeated to support a sliding presentation window, this field **MUST** be the amount of presentation time that has elapsed since the event began. Units are fractional seconds. This value may exceed the original specified duration of the splice or segment. | | TIME | decimal floating point number | Required | The presentation time of the event. Units are fractional seconds. |
The following details outline the specific values the client should expect in th
| Timescale | 32-bit unsigned integer | Required | The timescale, in ticks per second, of the times and duration fields within the 'emsg' box. | | Presentation_time_delta | 32-bit unsigned integer | Required | The media presentation time delta of the presentation time of the event and the earliest presentation time in this segment. The presentation time and duration **SHOULD** align with Stream Access Points (SAP) of type 1 or 2, as defined in [ISO-14496-12] Annex I. | | event_duration | 32-bit unsigned integer | Required | The duration of the event, or 0xFFFFFFFF to indicate an unknown duration. |
-| Id | 32-bit unsigned integer | Required | Identifies this instance of the message. Messages with equivalent semantics shall have the same value. If the ID is not specified when the message is ingested, Azure Media Services will generate a unique id. |
+| Id | 32-bit unsigned integer | Required | Identifies this instance of the message. Messages with equivalent semantics shall have the same value. If the ID is not specified when the message is ingested, Azure Media Services will generate a unique ID. |
| Message_data | byte array | Required | The event message. For [SCTE-35] messages, the message data is the binary splice_info_section() in compliance with [SCTE-214-3] |
mysql Partners Migration Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/partners-migration-mysql.md
To broadly support your Azure Database for MySQL solution, you can choose from a
| Partner | Description | Links | Videos | ||-|-|--| | ![SNP Technologies][1] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
-| ![DB Best Technologies, LLC][2] |**DB Best Technologies, LLC**<br>DB Best helps customers get the most out of a managed Azure database service. The company offers several ways for you to get started, including [Future-State Architectural Design](https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.dbbest.com%2Fservices%2Ffuture-state-architectural-design%2F&data=02%7C01%7Cjtoland%40microsoft.com%7C7311aa2024894a80eff208d5cfd45696%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636643433261194557&sdata=SCr3kseFvcU7mI1%2FZt7K2elXAqLY%2FyL6AO944QiWoLg%3D&reserved=0), [Data Management Optimization for Microsoft Data Platform](https://www.dbbest.com/services/data-management-optimization-for-microsoft-data-platform/), [Microsoft Azure Deployment Planning Services](https://www.dbbest.com/services/microsoft-azure-deployment-planning-services), and [Azure Data Platform Readiness Training](https://www.dbbest.com/services/azure-data-platform-readiness-training).|[Website][dbbest_website]<br>[Twitter][dbbest_twitter]<br>[YouTube][dbbest_youtube]<br>[Contact][dbbest_contact] | |
| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | | | ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | | | ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Website links --> [snp_website]:https://www.snp.com//
-[dbbest_website]:https://www.dbbest.com/technologies/azure-database-service-mysql-postgresql//
[pragmatic-works_website]:https://pragmaticworks.com// [infosys_website]:https://www.infosys.com/ [credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Press links--> <!--YouTube links-->
-[dbbest_youtube]:https://www.youtube.com/user/DBBestTech
[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks [infosys_youtube]:https://www.youtube.com/user/Infosys [credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A <!--Twitter links--> [snp_twitter]:https://twitter.com/snptechnologies
-[dbbest_twitter]:https://twitter.com/dbbest_tech
[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks [infosys_twitter]:https://twitter.com/infosys [credative_twitter]:https://twitter.com/credativ
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Contact links--> [snp_contact]:mailto:sachin@snp.com
-[dbbest_contact]:mailto:dmitry@dbbest.com
[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com [infosys_contact]:https://www.infosys.com/contact/ [credativ_contact]:mailto:info@credativ.com
network-watcher Connection Monitor Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-schema.md
+
+ Title: Azure Network Watcher Connection Monitor schema | Microsoft Docs
+description: Understand schema of Azure Network Watcher Connection Monitor.
+
+documentationcenter: na
++
+editor:
++
+ms.devlang: na
+
+ na
+ Last updated : 07/05/2021+++
+# Azure Network Watcher Connection Monitor schema
+
+Connection Monitor provides unified end-to-end connection monitoring in Azure Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud deployments. Network Watcher provides tools to monitor, diagnose, and view connectivity-related metrics for your Azure deployments.
+
+Here are some use cases for Connection Monitor:
+
+- Your front-end web server VM communicates with a database server VM in a multi-tier application. You want to check network connectivity between the two VMs.
+- You want VMs in the East US region to ping VMs in the Central US region, and you want to compare cross-region network latencies.
+- You have multiple on-premises office sites in Seattle, Washington, and in Ashburn, Virginia. Your office sites connect to Microsoft 365 URLs. For your users of Microsoft 365 URLs, compare the latencies between Seattle and Ashburn.
+- Your hybrid application needs connectivity to an Azure Storage endpoint. Your on-premises site and your Azure application connect to the same Azure Storage endpoint. You want to compare the latencies of the on-premises site to the latencies of the Azure application.
+- You want to check the connectivity between your on-premises setups and the Azure VMs that host your cloud application.
+
+Here are some benefits of Connection Monitor:
+
+* Unified, intuitive experience for Azure and hybrid monitoring needs
+* Cross-region, cross-workspace connectivity monitoring
+* Higher probing frequencies and better visibility into network performance
+* Faster alerting for your hybrid deployments
+* Support for connectivity checks that are based on HTTP, TCP, and ICMP
+* Metrics and Log Analytics support for both Azure and non-Azure test setups
+
+## Connection Monitor Tests schema
+
+Listed below are the fields in the Connection Monitor Tests schema and what they signify:
+
+| Field | Description |
+|||
+| TimeGenerated | The timestamp (UTC) of when the log was generated |
+| RecordId | The record ID for unique identification of test result record |
+| ConnectionMonitorResourceId | The connection monitor resource ID of the test |
+| TestGroupName | The name of the test group to which the test belongs |
+| TestConfigurationName | The name of the test configuration to which the test belongs |
+| SourceType | The type of the source machine configured for the test |
+| SourceResourceId | The resource ID of the source machine |
+| SourceAddress | The address of the source configured for the test |
+| SourceSubnet | The subnet of the source |
+| SourceIP | The IP address of the source |
+| SourceName | The source end point name |
+| SourceAgentId | The source agent ID |
+| DestinationPort | The destination port configured for the test |
+| DestinationType | The type of the destination machine configured for the test |
+| DestinationResourceId | The resource ID of the Destination machine |
+| DestinationAddress | The address of the destination configured for the test |
+| DestinationSubnet | If applicable, the subnet of the destination |
+| DestinationIP | The IP address of the destination |
+| DestinationName | The destination end point name |
+| DestinationAgentId | The destination agent ID |
+| Protocol | The protocol of the test |
+| ChecksTotal | The total number of checks done under the test |
+| ChecksFailed | The total number of checks failed under the test |
+| TestResult | The result of the test |
+| TestResultCriterion | The result criterion of the test |
+| ChecksFailedPercentThreshold | The checks failed percent threshold set for the test |
+| RoundTripTimeMsThreshold | The round trip threshold (ms) set for the test |
+| MinRoundTripTimeMs | The minimum round trip time (ms) for the test |
+| MaxRoundTripTimeMs | The maximum round trip time for the test |
+| AvgRoundTripTimeMs | The average round trip time for the test |
+| JitterMs | The mean deviation round trip time for the test |
+| AdditionalData | The additional data for the test |
++
+## Connection Monitor Path schema
+
+Listed below are the fields in the Connection Monitor Path schema and what they signify:
+
+| Field | Description |
+|||
+| TimeGenerated | The timestamp (UTC) of when the log was generated |
+| RecordId | The record ID for unique identification of test result record |
+| TopologyId | The topology ID of the path record |
+| ConnectionMonitorResourceId | The connection monitor resource ID of the test |
+| TestGroupName | The name of the test group to which the test belongs |
+| TestConfigurationName | The name of the test configuration to which the test belongs |
+| SourceType | The type of the source machine configured for the test |
+| SourceResourceId | The resource ID of the source machine |
+| SourceAddress | The address of the source configured for the test |
+| SourceSubnet | The subnet of the source |
+| SourceIP | The IP address of the source |
+| SourceName | The source end point name |
+| SourceAgentId | The source agent ID |
+| DestinationPort | The destination port configured for the test |
+| DestinationType | The type of the destination machine configured for the test |
+| DestinationResourceId | The resource ID of the Destination machine |
+| DestinationAddress | The address of the destination configured for the test |
+| DestinationSubnet | If applicable, the subnet of the destination |
+| DestinationIP | The IP address of the destination |
+| DestinationName | The destination end point name |
+| DestinationAgentId | The destination agent ID |
+| Protocol | The protocol of the test |
+| ChecksTotal | The total number of checks done under the test |
+| ChecksFailed | The total number of checks failed under the test |
+| PathTestResult | The result of the test |
+| PathResultCriterion | The result criterion of the test |
+| ChecksFailedPercentThreshold | The checks failed percent threshold set for the test |
+| RoundTripTimeMsThreshold | The round trip threshold (ms) set for the test |
+| MinRoundTripTimeMs | The minimum round trip time (ms) for the test |
+| MaxRoundTripTimeMs | The maximum round trip time for the test |
+| AvgRoundTripTimeMs | The average round trip time for the test |
+| JitterMs | The mean deviation round trip time for the test |
+| HopAddresses | The hop addresses identified for the test |
+| HopTypes | The hop types identified for the test |
+| HopLinkTypes | The hop link types identified for the test |
+| HopResourceIds | The hop resource IDs identified for the test |
+| Issues | The issues identified for the test |
+| Hops | The hops identified for the test |
+| AdditionalData | The additional data for the test |
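If you want to read these fields programmatically rather than in the portal, one option is the `azure-monitor-query` library. The following sketch is illustrative only and rests on assumptions: that your Connection Monitor results land in the `NWConnectionMonitorTestResult` table of your Log Analytics workspace (verify the table name in your own workspace) and that you have the workspace ID at hand.

```python
# Hypothetical example: read Connection Monitor test results from a Log Analytics
# workspace with the azure-monitor-query library. The table name
# (NWConnectionMonitorTestResult) and the workspace ID are assumptions; verify
# them in your own workspace before relying on this.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"
client = LogsQueryClient(DefaultAzureCredential())

# Project a few of the schema fields described in the tables above.
query = """
NWConnectionMonitorTestResult
| where TimeGenerated > ago(1h)
| project TimeGenerated, TestGroupName, TestConfigurationName,
          TestResult, ChecksTotal, ChecksFailed, AvgRoundTripTimeMs
| take 20
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=1))
for table in response.tables:
    for row in table.rows:
        print(list(row))
```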
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/partners-migration-postgresql.md
To broadly support your Azure Database for PostgreSQL solution, you can choose f
| Partner | Description | Links | Videos | | | | | | | ![SNP Technologies][1] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
-| ![DB Best Technologies, LLC][2] |**DB Best Technologies, LLC**<br>DB Best helps customers get the most out of a managed Azure database service. The company offers several ways for you to get started, including [Future-State Architectural Design](https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.dbbest.com%2Fservices%2Ffuture-state-architectural-design%2F&data=02%7C01%7Cjtoland%40microsoft.com%7C7311aa2024894a80eff208d5cfd45696%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636643433261194557&sdata=SCr3kseFvcU7mI1%2FZt7K2elXAqLY%2FyL6AO944QiWoLg%3D&reserved=0), [Data Management Optimization for Microsoft Data Platform](https://www.dbbest.com/services/data-management-optimization-for-microsoft-data-platform/), [Microsoft Azure Deployment Planning Services](https://www.dbbest.com/services/microsoft-azure-deployment-planning-services), and [Azure Data Platform Readiness Training](https://www.dbbest.com/services/azure-data-platform-readiness-training).|[Website][dbbest_website]<br>[Twitter][dbbest_twitter]<br>[YouTube][dbbest_youtube]<br>[Contact][dbbest_contact] | |
| ![Pragmatic Works][3] |**Pragmatic Works**<br>Pragmatic Works is a training and consulting company with deep expertise in data management and performance, Business Intelligence, Big Data, Power BI, and Azure. They focus on data optimization and improving the efficiency of SQL Server and cloud management.|[Website][pragmatic-works_website]<br>[Twitter][pragmatic-works_twitter]<br>[YouTube][pragmatic-works_youtube]<br>[Contact][pragmatic-works_contact] | | | ![Infosys][4] |**Infosys**<br>Infosys is a global leader in the latest digital services and consulting. With over three decades of experience managing the systems of global enterprises, Infosys expertly steers clients through their digital journey by enabling organizations with an AI-powered core. Doing so helps prioritize the execution of change. Infosys also provides businesses with agile digital at scale to deliver unprecedented levels of performance and customer delight.|[Website][infosys_website]<br>[Twitter][infosys_twitter]<br>[YouTube][infosys_youtube]<br>[Contact][infosys_contact] | | | ![credativ][5] |**credativ**<br>credativ is an independent consulting and services company. Since 1999, they have offered comprehensive services and technical support for the implementation and operation of Open Source software in business applications. Their comprehensive range of services includes strategic consulting, sound technical advice, qualified training, and personalized support up to 24 hours per day for all your IT needs.|[Marketplace][credativ_marketplace]<br>[Website][credativ_website]<br>[Twitter][credative_twitter]<br>[YouTube][credativ_youtube]<br>[Contact][credativ_contact] | |
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Website links --> [snp_website]:https://www.snp.com//
-[dbbest_website]:https://www.dbbest.com/technologies/azure-database-service-mysql-postgresql//
[pragmatic-works_website]:https://pragmaticworks.com// [infosys_website]:https://www.infosys.com/ [credativ_website]:https://www.credativ.com/postgresql-competence-center/microsoft-azure
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Press links--> <!--YouTube links-->
-[dbbest_youtube]:https://www.youtube.com/user/DBBestTech
[pragmatic-works_youtube]:https://www.youtube.com/user/PragmaticWorks [infosys_youtube]:https://www.youtube.com/user/Infosys [credativ_youtube]:https://www.youtube.com/channel/UCnSnr6_TcILUQQvAwlYFc8A <!--Twitter links--> [snp_twitter]:https://twitter.com/snptechnologies
-[dbbest_twitter]:https://twitter.com/dbbest_tech
[pragmatic-works_twitter]:https://twitter.com/PragmaticWorks [infosys_twitter]:https://twitter.com/infosys [credative_twitter]:https://twitter.com/credativ
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Contact links--> [snp_contact]:mailto:sachin@snp.com
-[dbbest_contact]:mailto:dmitry@dbbest.com
[pragmatic-works_contact]:mailto:marketing@pragmaticworks.com [infosys_contact]:https://www.infosys.com/contact/ [credativ_contact]:mailto:info@credativ.com
purview Concept Resource Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-resource-sets.md
Display name: "data"
## Customizing resource set grouping using pattern rules
-hen scanning a storage account, Azure Purview uses a set of defined patterns to determine if a group of assets is a resource set. In some cases, Azure Purview's resource set grouping may not accurately reflect your data estate. These issues can include:
+When scanning a storage account, Azure Purview uses a set of defined patterns to determine if a group of assets is a resource set. In some cases, Azure Purview's resource set grouping may not accurately reflect your data estate. These issues can include:
- Incorrectly marking an asset as a resource set - Putting an asset into the wrong resource set
security-center Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-container-registries-introduction.md
Title: Azure Defender for container registries - the benefits and features
description: Learn about the benefits and features of Azure Defender for container registries. Previously updated : 04/07/2021 Last updated : 07/05/2021
Azure Container Registry (ACR) is a managed, private Docker registry service that stores and manages your container images for Azure deployments in a central registry. It's based on the open-source Docker Registry 2.0.
-To protect all the Azure Resource Manager based registries in your subscription, enable **Azure Defender for container registries** at the subscription level. Security Center will then scan images that are pushed to the registry, imported into the registry, or any images pulled within the last 30 days. This feature is charged per image.
+To protect the Azure Resource Manager based registries in your subscription, enable **Azure Defender for container registries** at the subscription level. Azure Defender will then scan all images when they're pushed to the registry, imported into the registry, or pulled within the last 30 days. You'll be charged for every image that gets scanned, once per image.
[!INCLUDE [Defender for container registries availability info](../../includes/security-center-availability-defender-for-container-registries.md)]
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
The extension can also protect Kubernetes clusters on other cloud providers, alt
| Release state | **Preview**<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)]| | Required roles and permissions | [Security admin](../role-based-access-control/built-in-roles.md#security-admin) can dismiss alerts<br>[Security reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings | | Pricing | Requires [Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md) |
-| Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
+| Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Limitations | Azure Arc enabled Kubernetes and the Azure Defender extension **don't support** managed Kubernetes offerings like Google Kubernetes Engine and Elastic Kubernetes Service. [Azure Defender is natively available for Azure Kubernetes Service (AKS)](defender-for-kubernetes-introduction.md) and doesn't require connecting the cluster to Azure Arc. | | Environments and regions | Availability for this extension is the same as [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md)|
security-center Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-resource-manager-introduction.md
Azure Defender for Resource Manager automatically monitors the resource manageme
|-|:-| |Release state:|General Availability (GA)| |Pricing:|**Azure Defender for Resource Manager** is billed as shown on [Security Center pricing](https://azure.microsoft.com/pricing/details/security-center/)|
-|Clouds:|![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![Yes](./media/icons/yes-icon.png) US Gov, Other Gov<br>![No](./media/icons/no-icon.png) Azure China|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: US Gov<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China (Preview)|
||| ## What are the benefits of Azure Defender for Resource Manager?
security-center Security Center Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
For information about when recommendations are generated for each of these prote
| - [Azure Defender for SQL servers on machines](/azure/security-center/defender-for-sql-introduction) | GA | GA | Not Available | | - [Azure Defender for open-source relational databases](/azure/security-center/defender-for-databases-introduction) | GA | Not Available | Not Available | | - [Azure Defender for Key Vault](/azure/security-center/defender-for-key-vault-introduction) | GA | Not Available | Not Available |
-| - [Azure Defender for Resource Manager](/azure/security-center/defender-for-resource-manager-introduction) | GA | Public Preview | Not Available |
+| - [Azure Defender for Resource Manager](/azure/security-center/defender-for-resource-manager-introduction) | GA | Public Preview | Public Preview|
| - [Azure Defender for Storage](/azure/security-center/defender-for-storage-introduction) <sup>[6](#footnote6)</sup> | GA | GA | Not Available |
-| - [Threat protection for Cosmos DB](/azure/security-center/other-threat-protections#threat-protection-for-azure-cosmos-db-preview) | Public Preview | Not Available | Not Available |
+| - [Threat protection for Cosmos DB](/azure/security-center/other-threat-protections#threat-protection-for-azure-cosmos-db-preview) | Public Preview | Not Available | Not Available |
| - [Kubernetes workload protection](/azure/security-center/kubernetes-workload-protections) | GA | GA | GA | | **Azure Defender for servers features** <sup>[7](#footnote7)</sup> | | | | | - [Just-in-time VM access](/azure/security-center/security-center-just-in-time) | GA | GA | GA |
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-solution.md
Previously updated : 06/09/2021 Last updated : 07/06/2021
To run the SAP data connector deployment script, you'll need the following detai
- The SAP user you created in [Configure your SAP system](#configure-your-sap-system), with the **/MSFTSEN/SENTINEL_CONNECTOR** role applied. - The help of your SAP team. - **To run the SAP solution deployment script**: 1. Run the following command to deploy the SAP solution on your VM:
If you have a Docker container already running with an earlier version of the SA
The SAP data connector Docker container on your machine is updated.
+## Collect SAP HANA audit logs
+
+If you have SAP HANA database audit logs configured with Syslog, you'll also need to configure your Log Analytics agent to collect the Syslog files.
+
+1. Make sure that the SAP HANA audit log trail is configured to use Syslog as described in *SAP Note 0002624117*, accessible from the [SAP Launchpad support site](https://launchpad.support.sap.com/#/notes/0002624117). For more information, see:
+
+ - [SAP HANA Audit Trail - Best Practice](https://archive.sap.com/documents/docs/DOC-51098)
+ - [Recommendations for Auditing](https://help.sap.com/viewer/742945a940f240f4a2a0e39f93d3e2d4/2.0.05/en-US/5c34ecd355e44aa9af3b3e6de4bbf5c1.html)
+
+1. Check your operating system Syslog files for any relevant HANA database events.
+
+1. Install and configure a Log Analytics agent on your machine:
+
+ 1. Sign in to your HANA database operating system as a user with sudo privileges.
+ 1. In the Azure portal, go to your Log Analytics workspace. On the left, under **Settings**, select **Agents management > Linux servers**.
+ 1. Copy the code shown in the box under **Download and onboard agent for Linux** to your terminal and run the script.
+
+ The Log Analytics agent is installed on your machine and connected to your workspace. For more information, see [Install Log Analytics agent on Linux computers](/azure/azure-monitor/agents/agent-linux) and [OMS Agent for Linux](https://github.com/microsoft/OMS-Agent-for-Linux) on the Microsoft GitHub repository.
+
+1. Refresh the **Agents Management > Linux servers** tab to see that you have **1 Linux computers connected**.
+
+1. Under **Settings** on the left, select **Agents configuration** and select the **Syslog** tab.
+
+1. Select **Add facility** to add the facilities you want to collect.
+
+ > [!TIP]
+ > Since the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities, check them against your Syslog logs, and then remove any that aren't relevant.
+ >
+
+1. In Azure Sentinel, check to see that HANA database events are now shown in the ingested logs.
+ ## Next steps Learn more about the Azure Sentinel SAP solutions:
service-bus-messaging Service Bus Filter Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-filter-examples.md
DateTimeMp2-DateTimeMp1 <= @timespan
## Using IN and NOT IN ```csharp
-StoreId IN('Store1', 'Store2', 'Store3')"
+StoreId IN('Store1', 'Store2', 'Store3')
sys.To IN ('Store5','Store6','Store7') OR StoreId = 'Store8'
See the following samples:
- [.NET - Basic send and receive tutorial with filters](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/GettingStarted/BasicSendReceiveTutorialwithFilters/BasicSendReceiveTutorialWithFilters) - [.NET - Topic filters](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/TopicFilters)-- [Azure Resource Manager template](/azure/templates/microsoft.servicebus/2017-04-01/namespaces/topics/subscriptions/rules)
+- [Azure Resource Manager template](/azure/templates/microsoft.servicebus/2017-04-01/namespaces/topics/subscriptions/rules)
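The `IN` filter shown earlier can also be applied to a subscription rule programmatically. The following sketch assumes the `azure-servicebus` Python package and placeholder connection string, topic, and subscription names; it's an illustration, not part of the linked samples.

```python
# Hypothetical sketch: apply the IN filter shown earlier to a subscription rule
# with the azure-servicebus Python package. The connection string, topic, and
# subscription names are placeholders.
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

CONN_STR = "<service-bus-namespace-connection-string>"
TOPIC = "store-events"
SUBSCRIPTION = "stores-1-to-3"

admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)

# Note: the subscription's $Default rule matches everything; remove it if you
# want only messages matching this filter to be delivered.
admin.create_rule(
    TOPIC,
    SUBSCRIPTION,
    "StoreIdIn",
    filter=SqlRuleFilter("StoreId IN ('Store1','Store2','Store3')"),
)
```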
service-bus-messaging Service Bus Php How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-php-how-to-use-queues.md
Last updated 06/23/2020
In this tutorial, you learn how to create PHP applications to send messages to and receive messages from a Service Bus queue.
+> [!IMPORTANT]
+> As of February 2021, the Azure SDK for PHP has entered a retirement phase and is no longer officially supported by Microsoft. For more information, see [this Announcement](https://github.com/Azure/azure-sdk-for-php#important-annoucement) on GitHub. This article will be retired soon.
+ ## Prerequisites 1. An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF). 2. If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue.
service-bus-messaging Service Bus Php How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-php-how-to-use-topics-subscriptions.md
Last updated 06/23/2020
# Quickstart: How to use Service Bus topics and subscriptions with PHP - This article shows you how to use Service Bus topics and subscriptions. The samples are written in PHP and use the [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php). The scenarios covered include: - Creating topics and subscriptions
This article shows you how to use Service Bus topics and subscriptions. The samp
- Receiving messages from a subscription - Deleting topics and subscriptions
+> [!IMPORTANT]
+> As of February 2021, the Azure SDK for PHP has entered a retirement phase and is no longer officially supported by Microsoft. For more information, see [this Announcement](https://github.com/Azure/azure-sdk-for-php#important-annoucement) on GitHub. This article will be retired soon.
+
+ ## Prerequisites 1. An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF). 2. Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md) to create a Service Bus **namespace** and get the **connection string**.
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
This article describes how to replicate, failover, and failback Azure virtual ma
>[!NOTE] >
->- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, Japan East, Australia East, UK South, West Europe, North Europe, France Central, Central US, East US, East US 2, West US 2, and West US 3.
+>- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, Japan East, Australia East, UK South, West Europe, North Europe, France Central, Central US, South Central US, East US, East US 2, West US 2, and West US 3.
>- Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data. Site Recovery service contributes to your business continuity and disaster recovery strategy by keeping your business apps up and running, during planned and unplanned outages. It is the recommended Disaster Recovery option to keep your applications up and running if there are regional outages.
spring-cloud Expose Apps Gateway Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/expose-apps-gateway-azure-firewall.md
This document explains how to expose applications to the Internet using Applicat
## Define variables
-Define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Cloud in Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md). Customize the values based on your real environment.
+Define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Cloud in Azure virtual network (VNet injection)](./how-to-deploy-in-azure-virtual-network.md). Customize the values based on your real environment. When you define SPRING_APP_PRIVATE_FQDN, remove the 'https' prefix from the URI.
``` SUBSCRIPTION='subscription-id'
spring-cloud How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-prepare-app-deployment.md
description: Learn how to prepare an application for deployment to Azure Spring
Previously updated : 09/08/2020 Last updated : 07/06/2021 zone_pivot_groups: programming-languages-spring-cloud
Spring Boot version | Spring Cloud version
---|---
2.2 | Hoxton.SR8+
2.3 | Hoxton.SR8+
-2.4.1+ | 2020.0.0
+2.4.1+ | 2020.0.1+
> [!NOTE]
-> We've identified an issue with Spring Boot 2.4.0 on TLS authentication between your apps and Eureka, please use 2.4.1 or above. Please refer to our [FAQ](./faq.md?pivots=programming-language-java#development) for the workaround if you insist on using 2.4.0.
+> - Please upgrade Spring Boot to 2.5.2 or 2.4.8 to address the following CVE report [CVE-2021-22119: Denial-of-Service attack with spring-security-oauth2-client](https://tanzu.vmware.com/security/cve-2021-22119). If you are using Spring Security, please upgrade it to 5.5.1, 5.4.7, 5.3.10 or 5.2.11.
+> - An issue was identified with Spring Boot 2.4.0 on TLS authentication between apps and Spring Cloud Service Registry; use 2.4.1 or above. Refer to the [FAQ](./faq.md?pivots=programming-language-java#development) for a workaround if you must use 2.4.0.
### Dependencies for Spring Boot version 2.2/2.3
For Spring Boot version 2.2 add the following dependencies to the application PO
<parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId>
- <version>2.4.1.RELEASE</version>
+ <version>2.4.8</version>
</parent> <!-- Spring Cloud dependencies -->
For Spring Boot version 2.2 add the following dependencies to the application PO
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId>
- <version>2020.0.0</version>
+ <version>2020.0.2</version>
<type>pom</type> <scope>import</scope> </dependency>
To enable Distributed Configuration, include the following `spring-cloud-config-
<groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-config-client</artifactId> </dependency>
+<dependency>
+ <groupId>org.springframework.cloud</groupId>
+ <artifactId>spring-cloud-starter-bootstrap</artifactId>
+</dependency>
``` > [!WARNING]
storage Data Lake Storage Supported Blob Storage Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
The following table shows how each Blob storage feature is supported with Data L
|Blob storage PowerShell commands|Generally available|Generally available|[Quickstart: Upload, download, and list blobs with PowerShell](storage-quickstart-blobs-powershell.md)| |Blob storage Azure CLI commands|Generally available|Generally available|[Quickstart: Create, download, and list blobs with Azure CLI](storage-quickstart-blobs-cli.md)| |Blob storage APIs|Generally available|Generally available|[Quickstart: Azure Blob storage client library v12 for .NET](storage-quickstart-blobs-dotnet.md)<br>[Quickstart: Manage blobs with Java v12 SDK](storage-quickstart-blobs-java.md)<br>[Quickstart: Manage blobs with Python v12 SDK](storage-quickstart-blobs-python.md)<br>[Quickstart: Manage blobs with JavaScript v12 SDK in Node.js](storage-quickstart-blobs-nodejs.md)|
+|Customer-managed keys|Generally available|Generally available|[Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json)|
|Diagnostic logs|Generally available|Preview |[Azure Storage analytics logging](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| |Archive Access Tier|Generally available|Not supported|[Azure Blob storage: hot, cool, and archive access tiers](storage-blob-storage-tiers.md)| |Lifecycle management policies (tiering)|Generally available|Not yet supported|[Manage the Azure Blob storage lifecycle](storage-lifecycle-management-concepts.md)|
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-overview.md
Previously updated : 03/03/2021 Last updated : 07/06/2021
Point-in-time restore for block blobs has the following limitations and known is
- Only block blobs in a standard general-purpose v2 storage account can be restored as part of a point-in-time restore operation. Append blobs, page blobs, and premium block blobs are not restored.
- If you have deleted a container during the retention period, that container will not be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. To learn about protecting containers from deletion, see [Soft delete for containers (preview)](soft-delete-container-overview.md).
- If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier is not supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob is not restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](storage-blob-rehydration.md).
-- If an immutable storage policy is set and blobs are protected by policy, a restore can be submitted but, any immutable blobs will not be modified. A restore in this case will not yield a consistent state to the restore date and time given.
+- If an immutability policy is configured, then a restore operation can be initiated, but any blobs that are protected by the immutability policy will not be modified. A restore operation in this case will not result in the restoration of a consistent state to the date and time given.
- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), is not part of a blob and so is not restored as part of a restore operation.
- A blob with an active lease cannot be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail atomically. Break any active leases prior to initiating the restore operation.
- Snapshots are not created or deleted as part of a restore operation. Only the base blob is restored to its previous state.
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-how-to-mount-container-linux.md
Previously updated : 2/1/2019 Last updated : 07/06/2021
This guide shows you how to use blobfuse, and mount a Blob storage container on
## Install blobfuse on Linux

Blobfuse binaries are available on [the Microsoft software repositories for Linux](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software) for Ubuntu, Debian, SUSE, CentOS, Oracle Linux, and RHEL distributions. To install blobfuse on those distributions, configure one of the repositories from the list. You can also build the binaries from source code by following the [Azure Storage installation steps](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation#option-2build-from-source) if there are no binaries available for your distribution.
-Blobfuse supports installation on Ubuntu versions: 16.04, 18.04, and 20.04, RHELversions: 7.5, 7.8, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, OracleLinux 8.1 . Run this command to make sure that you have one of those versions deployed:
+Blobfuse is published in the Linux repo for Ubuntu versions 16.04, 18.04, and 20.04; RHEL versions 7.5, 7.8, 8.0, 8.1, and 8.2; CentOS versions 7.0 and 8.0; Debian versions 9.0 and 10.0; SUSE version 15; and Oracle Linux 8.1. Run this command to make sure that you have one of those versions deployed:
```
lsb_release -a
```
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-recover.md
Previously updated : 12/11/2020 Last updated : 07/06/2021
A deleted storage account may be recovered in some cases from within the Azure p
- The storage account was deleted within the past 14 days.
- The storage account was created with the Azure Resource Manager deployment model.
- A new storage account with the same name has not been created since the original account was deleted.
+- The user who is recovering the storage account must be assigned an Azure RBAC role that provides the **Microsoft.Storage/storageAccounts/write** permission. For information about built-in Azure RBAC roles that provide this permission, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
Before you attempt to recover a deleted storage account, make sure that the resource group for that account exists. If the resource group was deleted, you must recreate it. Recovering a resource group is not possible. For more information, see [Manage resource groups](../../azure-resource-manager/management/manage-resource-groups-portal.md).
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-blobs-download.md
Title: Download blobs from Azure Blob storage by using AzCopy v10 | Microsoft Docs
-description: This article contains a collection of AzCopy example commands that help you download blobs from Azure Blob storage.
+ Title: Download blobs from Azure Blob Storage by using AzCopy v10 | Microsoft Docs
+description: This article contains a collection of AzCopy example commands that help you download blobs from Azure Blob Storage.
-# Download blobs from Azure Blob storage by using AzCopy
+# Download blobs from Azure Blob Storage by using AzCopy
You can download blobs and directories from Blob storage by using the AzCopy v10 command-line utility.
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-troubleshooting-files-performance.md
description: Troubleshoot known performance issues with Azure file shares. Disco
Previously updated : 11/16/2020 Last updated : 07/06/2021 #Customer intent: As a < type of user >, I want < what? > so that < why? >.
To confirm whether your share is being throttled, you can access and use Azure m
> [!NOTE] > To receive an alert, see the ["How to create an alert if a file share is throttled"](#how-to-create-an-alert-if-a-file-share-is-throttled) section later in this article.
-### Solution
+#### Solution
- If you're using a standard file share, [enable large file shares](storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) on your storage account and [increase the size of the file share quota to take advantage of large file share support](storage-how-to-create-file-share.md#expand-existing-file-shares). Large file shares support greater IOPS and bandwidth limits; see [Azure Files scalability and performance targets](storage-files-scale-targets.md) for details.
- If you're using a premium file share, increase the provisioned file share size to increase the IOPS limit. To learn more, see [Understanding provisioning for premium file shares](./understanding-billing.md#provisioned-model).
To determine whether most of your requests are metadata-centric, start by follow
![Screenshot of the metrics options for premium file shares, showing an "API name" property filter.](media/storage-troubleshooting-premium-fileshares/MetadataMetrics.png)
-### Workaround
+#### Workaround
- Check whether the application can be modified to reduce the number of metadata operations.
- Add a virtual hard disk (VHD) on the file share and mount the VHD over SMB from the client to perform file operations against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers and no writers. Because the file system is owned by the client rather than Azure Files, metadata operations stay local. The setup offers performance similar to that of local directly attached storage.
To determine whether most of your requests are metadata-centric, start by follow
If the application that you're using is single-threaded, this setup can result in significantly lower IOPS throughput than the maximum possible throughput, depending on your provisioned share size.
-### Solution
+#### Solution
- Increase application parallelism by increasing the number of threads.
- Switch to applications where parallelism is possible. For example, for copy operations, you could use AzCopy or RoboCopy from Windows clients or the **parallel** command from Linux clients.
+### Cause 4: Number of SMB channels exceeds four
+
+If you're using SMB MultiChannel and the number of channels exceeds four, performance will suffer. To determine whether your connection count exceeds four, use the PowerShell cmdlet `Get-SmbClientConfiguration` to view the current connection count settings.
+
+#### Solution
+
+Set the Windows per NIC setting for SMB so that the total channels don't exceed four. For example, if you have two NICs, you can set the maximum per NIC to two using the following PowerShell cmdlet: `Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 2`.
## Very high latency for requests

### Cause
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-create-workspace.md
Select **Review + create** > **Create**. Your workspace is ready in a few minute
After your Azure Synapse workspace is created, you have two ways to open Synapse Studio:
-* Open your Synapse workspace in the [Azure portal](https://portal.azure.com), in the **Overview** section of the Synapse workspace, select **Open** in the Open Synapse Studio box.
-* Go to the `https://web.azuresynapse.net` and sign in to your workspace.
+1. Open your Synapse workspace in the [Azure portal](https://portal.azure.com). In the **Overview** section of the Synapse workspace, select **Open** in the Open Synapse Studio box.
+1. Go to `https://web.azuresynapse.net` and sign in to your workspace.
+
+ ![Log in to workspace](./security/media/common/login-workspace.png)
+
+> [!NOTE]
+> To sign in to your workspace, there are two **Account selection methods**: **Azure subscription** and **Enter manually**. If you have the Synapse Azure role or a higher-level Azure role, you can use either method to sign in to the workspace. If you don't have the related Azure roles and were granted only a Synapse RBAC role, **Enter manually** is the only way to sign in to the workspace. To learn more about Synapse RBAC, see [What is Synapse role-based access control (RBAC)](./security/synapse-workspace-synapse-rbac.md).
## Place sample data into the primary storage account

We are going to use a small 100K row sample dataset of NYC Taxi Cab data for many examples in this getting started guide. We begin by placing it in the primary storage account you created for the workspace.
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
Previously updated : 11/20/2020 Last updated : 07/01/2021
You need a Spark table for this tutorial.
## Open the Cognitive Services wizard
-1. Right-click the Spark table created in the previous step. Select **Machine Learning** > **Enrich with existing model** to open the wizard.
+1. Right-click the Spark table created in the previous step. Select **Machine Learning** > **Predict with a model** to open the wizard.
- ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00g.png)
+ ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00g2.png)
2. A configuration panel appears, and you're asked to select a Cognitive Services model. Select **Anomaly Detector**.
- ![Screenshot that shows selection of Anomaly Detector as a model.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00c.png)
+ ![Screenshot that shows selection of Anomaly Detector as a model.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00c2.png)
## Provide authentication details
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
You'll need a Spark table for this tutorial.
## Open the Cognitive Services wizard
-1. Right-click the Spark table created in the previous procedure. Select **Machine Learning** > **Enrich with existing model** to open the wizard.
+1. Right-click the Spark table created in the previous procedure. Select **Machine Learning** > **Predict with a model** to open the wizard.
- ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00d.png)
+ ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00d2.png)
2. A configuration panel appears, and you're asked to select a Cognitive Services model. Select **Text analytics - Sentiment Analysis**.
- ![Screenshot that shows selection of a Cognitive Services model.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00e.png)
+ ![Screenshot that shows selection of a Cognitive Services model.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00e2.png)
## Provide authentication details
synapse-analytics How To Manage Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md
This article shows how to add and delete Synapse RBAC role assignments.
## Open Synapse Studio
-To assign a role to a user, group, service principal, or managed identity, first [open the Synapse Studio](https://web.azuresynapse.net/) and select your workspace.
+To assign a role to a user, group, service principal, or managed identity, first [open the Synapse Studio](https://web.azuresynapse.net/) and log into your workspace.
![Log in to workspace](./media/common/login-workspace.png)
synapse-analytics How To Review Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-review-synapse-rbac-role-assignments.md
With any Synapse RBAC role, you can list Synapse RBAC role assignments for all s
## Open Synapse Studio
-To review role assignments, first [open the Synapse Studio](https://web.azuresynapse.net/) and select your workspace.
+To review role assignments, first [open the Synapse Studio](https://web.azuresynapse.net/) and select your workspace. To sign in to your workspace, there are two **Account selection methods**: **Azure subscription** and **Enter manually**. If you have the Synapse Azure role or a higher-level Azure role, you can use either method to sign in to the workspace. If you don't have the related Azure roles and were granted only a Synapse RBAC role, **Enter manually** is the only way to sign in to the workspace.
![Log in to workspace](./media/common/login-workspace.png)
synapse-analytics Sql Data Warehouse Partner Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![BI Builders (Xpert BI)](./media/sql-data-warehouse-partner-data-integration/bibuilders-logo.png) |**BI Builders (Xpert BI)**<br> Xpert BI helps organizations build and maintain a robust and scalable data platform in Azure faster through metadata-based automation. It extends Azure Synapse with best practices and DataOps, for agile data development with built-in data governance functionalities. Use Xpert BI to quickly test out and switch between different Azure solutions such as Azure Synapse, Azure Data Lake Storage, and Azure SQL Database, as your business and analytics needs changes and grows.|[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>| | ![BryteFlow](./media/sql-data-warehouse-partner-data-integration/bryteflow-logo.png) |**BryteFlow**<br> With BryteFlow, you can continually replicate data from transactional sources like Oracle, SQL Server, SAP, MySQL, and more to Azure Synapse Analytics in real time, with best practices, and access reconciled data that is ready-to-use. BryteFlow extracts and replicates data in minutes using log-based Change Data Capture and merges deltas automatically to update data. It can be configured with times series as well. There's no coding for any process (just point and select!) and tables are created automatically on the destination. BryteFlow supports enterprise-scale automated data integration with extremely high throughput, ingesting terabytes of data, with smart partitioning, and multi-threaded, parallel loading.|[Product page](https://bryteflow.com/data-integration-on-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bryte.bryteflowingest-azure-standard?tab=Overview)<br>| | ![CData](./media/sql-data-warehouse-partner-data-integration/cdata-logo.png) |**CData Sync - Cloud Data Pipeline**<br>Build high-performance data pipelines for Microsoft Azure Synapse in minutes. CData Sync is an easy-to-use, go-anywhere ETL/ELT pipeline that streamlines data flow from more than 200+ enterprise data sources to Azure Synapse. With CData Sync, users can easily create automated continuous data replication between Accounting, CRM, ERP, Marketing Automation, On-Premises, and cloud data.|[Product page](https://www.cdata.com/sync/to/azuresynapse/?utm_source=azuresynapse&utm_medium=partner)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/cdatasoftware.cdatasync?tab=Overview)<br>|
-| ![Datometry](./media/sql-data-warehouse-partner-data-integration/datometry-logo.png) |**Datometry**<br>Datometry Hyper-Q makes existing applications written for Teradata run natively on Azure Synapse. Datometry emulates commonly used Teradata SQL, including analytical SQL, and advanced operational concepts like stored procedures, macros, SET tables, and more. Because Hyper-Q returns results that are bit-identical to Teradata, existing applications can be replatformed to Azure Synapse without any significant modifications. With Datometry, enterprises can move to Azure rapidly and take full advantage of Synapse immediately.|[Product page](https://datometry.com/solutions/replatforming/migrate-teradata-to-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datometry1601339937807.dtm-hyperq-azure-101?tab=Overview)<br>
+| ![Datometry](./media/sql-data-warehouse-partner-data-integration/datometry-logo.png) |**Datometry**<br>Datometry Hyper-Q makes existing applications written for Teradata run natively on Azure Synapse. Datometry emulates commonly used Teradata SQL, including analytical SQL, and advanced operational concepts like stored procedures, macros, SET tables, and more. Because Hyper-Q returns results that are bit-identical to Teradata, existing applications can be replatformed to Azure Synapse without any significant modifications. With Datometry, enterprises can move to Azure rapidly and take full advantage of Synapse immediately.|[Product page](https://datometry.com/platform/hyper-q-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datometry1601339937807.dtm-hyperq-azure-101?tab=Overview)<br>
| ![Denodo](./media/sql-data-warehouse-partner-data-integration/denodo_logo.png) |**Denodo**<br>Denodo provide real-time access to data across an organization's diverse data sources. It uses data virtualization to bridge data across many sources without replication. Denodo offers broad access to structured and unstructured data residing in enterprise, big data, and cloud sources, in both batch and real time.|[Product page](https://www.denodo.com/en)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/denodo.denodo-platform-7_0-app-byol?tab=Overview)<br> | | ![Dimodelo](./media/sql-data-warehouse-partner-data-integration/dimodelo-logo.png) |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-click deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Product page](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> | | ![Fivetran](./media/sql-data-warehouse-partner-data-integration/fivetran_logo.png) |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero maintenance, zero configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Product page](https://fivetran.com/)<br> |
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-use-external-tables.md
CREATE EXTERNAL TABLE Covid (
); ```
+Delta Lake is in public preview and there are some known issues and limitations. Review the known issues on the [Synapse serverless SQL pool self-help page](resources-self-help-sql-on-demand.md#delta-lake).
## Use an external table

You can use [external tables](develop-tables-external-tables.md) in your queries the same way you use them in SQL Server queries.
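As a minimal illustration, the external table defined above can be queried directly. The sketch below assumes only the `Covid` table name shown earlier; because its column list is elided above, `SELECT *` is used rather than guessing column names:

```sql
-- Query the external table like any other table.
-- Only the table name 'Covid' comes from the snippet above.
SELECT TOP 10 *
FROM Covid;
```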
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-use-views.md
from openrowset(
) as rows ```
+Delta Lake is in public preview and there are some known issues and limitations. Review the known issues on the [Synapse serverless SQL pool self-help page](resources-self-help-sql-on-demand.md#delta-lake).
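For reference, a view over a Delta Lake folder follows the same `OPENROWSET` pattern shown above. The sketch below is illustrative only; the storage URL, container, and view name are placeholders rather than values from this article:

```sql
-- Minimal sketch: a view that wraps OPENROWSET over a Delta Lake folder (preview).
-- The URL and names below are placeholders.
CREATE VIEW dbo.CovidDeltaView
AS SELECT *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/covid/',
        FORMAT = 'DELTA'
    ) AS rows;
```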
## Partitioned views

If you have a set of files that is partitioned in a hierarchical folder structure, you can describe the partition pattern using wildcards in the file path. Use the `FILEPATH` function to expose parts of the folder path as partitioning columns.
The folder name in the `OPENROWSET` function (`yellow` in this example) that is
> [!div class="mx-imgBorder"] >![Yellow Taxi Delta Lake folder](./media/shared/yellow-taxi-delta-lake.png)
-Do not use the `WITH` clause in the `OPENROWSET` function when you query partitioned Delta Lake data. Due to the known issue in the preview, the `WITH` clause will not properly return the values from the underlying partitioning columns. Partition elimination works fine if you are directly using the `OPENROWSET` function with the `WITH` clause (without views).
+Do not use the `WITH` clause in the `OPENROWSET` function when you query partitioned Delta Lake data. Due to the known issue in the preview, the `WITH` clause will [not properly return the values from the underlying partitioning columns](resources-self-help-sql-on-demand.md#partitioning-column-returns-null-values). Partition elimination works fine if you are directly using the `OPENROWSET` function with the `WITH` clause (without views).
+
+Delta Lake is in public preview and there are some known issues and limitations. Review the known issues on the [Synapse serverless SQL pool self-help page](resources-self-help-sql-on-demand.md#delta-lake).
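To illustrate the pattern, a query against such a view can filter on the partitioning columns that the view exposes. The view and column names below are hypothetical, not taken from this article:

```sql
-- Sketch only: filter on partitioning columns exposed by a view created without a WITH clause.
-- dbo.YellowTaxiView, [year], and [month] are hypothetical names.
SELECT COUNT(*) AS trip_count
FROM dbo.YellowTaxiView
WHERE [year] = 2017 AND [month] = 6;
```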
## Use a view
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
GROUP BY geo_id
In this example, the number of cases is stored either as `int32`, `int64`, or `float64` values. All values must be extracted to calculate the number of cases per country.
-## Known issues
+## Troubleshooting
-- A serverless SQL pool will return a compile-time warning if the `OPENROWSET` column collation doesn't have UTF-8 encoding. You can easily change the default collation for all `OPENROWSET` functions running in the current database by using the T-SQL statement `alter database current collate Latin1_General_100_CI_AS_SC_UTF8`.-
-Possible errors and troubleshooting actions are listed in the following table.
-
-| Error | Root cause |
-| | |
-| Syntax errors:<br/> - Incorrect syntax near `Openrowset`<br/> - `...` is not a recognized `BULK OPENROWSET` provider option.<br/> - Incorrect syntax near `...` | Possible root causes:<br/> - Not using CosmosDB as the first parameter.<br/> - Using a string literal instead of an identifier in the third parameter.<br/> - Not specifying the third parameter (container name). |
-| There was an error in the CosmosDB connection string. | - The account, database, or key isn't specified. <br/> - There's some option in a connection string that isn't recognized.<br/> - A semicolon (`;`) is placed at the end of a connection string. |
-| Resolving CosmosDB path has failed with the error "Incorrect account name" or "Incorrect database name." | The specified account name, database name, or container can't be found, or analytical storage hasn't been enabled to the specified collection.|
-| Resolving CosmosDB path has failed with the error "Incorrect secret value" or "Secret is null or empty." | The account key isn't valid or is missing. |
-| Column `column name` of the type `type name` isn't compatible with the external data type `type name`. | The specified column type in the `WITH` clause doesn't match the type in the Azure Cosmos DB container. Try to change the column type as it's described in the section [Azure Cosmos DB to SQL type mappings](#azure-cosmos-db-to-sql-type-mappings), or use the `VARCHAR` type. |
-| Column contains `NULL` values in all cells. | Possibly a wrong column name or path expression in the `WITH` clause. The column name (or path expression after the column type) in the `WITH` clause must match some property name in the Azure Cosmos DB collection. Comparison is *case-sensitive*. For example, `productCode` and `ProductCode` are different properties. |
-
-You can report suggestions and issues on the [Azure Synapse Analytics feedback page](https://feedback.azure.com/forums/307516-azure-synapse-analytics?category_id=387862).
+Review the [self-help page](resources-self-help-sql-on-demand.md#cosmos-db) for known issues and troubleshooting steps that can help you resolve potential problems with Cosmos DB queries.
## Next steps
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-delta-lake-format.md
The second argument of `DeltaTable.convertToDeltaLake` function represents the p
## Limitations -- Schema inference doesn't work if you have complex data types. For complex data types, use explicit `WITH` schema and specify `VARCHAR(MAX)` type. -- The `OPENROWSET` function doesn't support updating a Delta Lake file or time travel. Use Apache Spark engine to perform these actions.
+This feature is in public preview and there are some known issues and limitations. Review the known issues on the [Synapse serverless SQL pool self-help page](resources-self-help-sql-on-demand.md#delta-lake).
## Next steps
If you want to continue building Delta Lake solution, learn how to create [views
- [Learn how to use Delta Lake in Apache Spark pools for Azure Synapse Analytics](../spark/apache-spark-delta-lake-overview.md) - [Azure Databricks Delta Lake best practices](/azure/databricks/best-practices-index) - [Delta Lake Documentation Page](https://docs.delta.io/latest/delta-intro.html)
+- [Known issues and limitations](resources-self-help-sql-on-demand.md#delta-lake)
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Create a separate database and reference the synchronized [tables](../metadata/t
## Cosmos DB
+Possible errors and troubleshooting actions are listed in the following table.
+
+| Error | Root cause |
+| | |
+| Syntax errors:<br/> - Incorrect syntax near `Openrowset`<br/> - `...` is not a recognized `BULK OPENROWSET` provider option.<br/> - Incorrect syntax near `...` | Possible root causes:<br/> - Not using CosmosDB as the first parameter.<br/> - Using a string literal instead of an identifier in the third parameter.<br/> - Not specifying the third parameter (container name). |
+| There was an error in the CosmosDB connection string. | - The account, database, or key isn't specified. <br/> - There's some option in a connection string that isn't recognized.<br/> - A semicolon (`;`) is placed at the end of a connection string. |
+| Resolving CosmosDB path has failed with the error "Incorrect account name" or "Incorrect database name." | The specified account name, database name, or container can't be found, or analytical storage hasn't been enabled to the specified collection.|
+| Resolving CosmosDB path has failed with the error "Incorrect secret value" or "Secret is null or empty." | The account key isn't valid or is missing. |
+| Column `column name` of the type `type name` isn't compatible with the external data type `type name`. | The specified column type in the `WITH` clause doesn't match the type in the Azure Cosmos DB container. Try to change the column type as it's described in the section [Azure Cosmos DB to SQL type mappings](query-cosmos-db-analytical-store.md#azure-cosmos-db-to-sql-type-mappings), or use the `VARCHAR` type. |
+| Column contains `NULL` values in all cells. | Possibly a wrong column name or path expression in the `WITH` clause. The column name (or path expression after the column type) in the `WITH` clause must match some property name in the Azure Cosmos DB collection. Comparison is *case-sensitive*. For example, `productCode` and `ProductCode` are different properties. |
+
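For comparison with the syntax errors listed above, the following sketch shows the expected shape of an `OPENROWSET` call against the Cosmos DB analytical store; the account, database, key, and container names are placeholders:

```sql
-- Sketch: 'CosmosDB' is the first parameter, the connection string has no trailing semicolon,
-- and the third parameter (the container name) is an identifier, not a string literal.
-- All values below are placeholders.
SELECT TOP 10 *
FROM OPENROWSET(
        'CosmosDB',
        'Account=<account-name>;Database=<database-name>;Key=<account-key>',
        MyContainer
    ) AS documents;
```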
+You can report suggestions and issues on the [Azure Synapse Analytics feedback page](https://feedback.azure.com/forums/307516-azure-synapse-analytics?category_id=387862).
+
+### UTF-8 collation warning is returned while reading CosmosDB string types
+
+A serverless SQL pool will return a compile-time warning if the `OPENROWSET` column collation doesn't have UTF-8 encoding. You can easily change the default collation for all `OPENROWSET` functions running in the current database by using the T-SQL statement `alter database current collate Latin1_General_100_CI_AS_SC_UTF8`.
+
+[Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) provides the best performance when you filter your data using string predicates.
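For convenience, the collation change mentioned above can be run as a single statement against the current serverless database:

```sql
-- Switch the default collation of the current database to a UTF-8 collation,
-- as described above, so OPENROWSET string columns no longer trigger the warning.
ALTER DATABASE CURRENT COLLATE Latin1_General_100_CI_AS_SC_UTF8;
```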
### Some rows are not returned

- There is a synchronization delay between the transactional and analytical store. The document that you entered in the Cosmos DB transactional store might appear in the analytical store after 2-3 minutes.
Synapse SQL will return `NULL` instead of the values that you see in the transac
The value specified in the `WITH` clause doesn't match the underlying Cosmos DB types in analytical storage and cannot be implicitly converted. Use `VARCHAR` type in the schema.
-### Performance issues
+### CosmosDB performance issues
If you are experiencing some unexpected performance issues, make sure that you applied the best practices, such as: - Make sure that you have placed the client application, serverless pool, and Cosmos DB analytical storage in [the same region](best-practices-serverless-sql-pool.md#colocate-your-cosmosdb-analytical-storage-and-serverless-sql-pool).
Cannot find value of partitioning column '<column name>' in file
**Workaround:** Try to update your Delta Lake data set using Apache Spark pools and use some value (empty string or `"null"`) instead of `null` in the partitioning column.
+### JSON text is not properly formatted
+
+This error indicates that serverless SQL pool cannot read the Delta Lake transaction log. You will probably see an error like the following:
+
+```
+Msg 13609, Level 16, State 4, Line 1
+JSON text is not properly formatted. Unexpected character '{' is found at position 263934.
+Msg 16513, Level 16, State 0, Line 1
+Error reading external metadata.
+```
+
+- Verify that you can read the content of the Delta Lake folder using an Apache Spark pool in Synapse or a Databricks cluster. This verifies that the `_delta_log` file is not corrupted.
+- Verify that you can read the content of the data files by specifying `FORMAT='PARQUET'` and using the recursive wildcard `/**` at the end of the URI path (see the sketch after this list). If you can read all Parquet files, the issue is in the `_delta_log` transaction log folder.
+
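The following is a minimal sketch of that Parquet check; the storage URL and folder name are placeholders rather than values from this article:

```sql
-- Sketch: read the data files under a Delta Lake folder as plain Parquet,
-- using the recursive /** wildcard described above. The URL is a placeholder.
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/delta-folder/**',
        FORMAT = 'PARQUET'
    ) AS rows;
```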
+In this case, create a support ticket and provide a repro to Azure support:
+- Do not make any changes like adding or removing columns or optimizing the table, because this might change the state of the Delta Lake transaction log files.
+- Copy the content of the `_delta_log` folder into a new empty folder. **DO NOT** copy the `.parquet` data files.
+- Try to read the content that you copied into the new folder and verify that you are getting the same error.
+- Now you can continue using the Delta Lake folder with your Spark pool. Provide the copied data to Microsoft support if you are allowed to share it.
+- Send the content of the copied `_delta_log` folder to Azure support.
+
+The Microsoft team will investigate the content of the `_delta_log` folder and provide more information about possible errors and workarounds.
+ ## Constraints There are some general system constraints that may affect your workload:
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dav4-dasv4-series.md
Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz and use premium SSD. The Dasv4-series sizes offer a combination of vCPU, memory and temporary storage for most production workloads.
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs | Expected network bandwidth (Mbps) |
-|--|--|--|--|--|--|--|--|--|
-| Standard_D2as_v4|2|8|16|4|4000 / 32 (50)|3200 / 48|2 | 800 |
-| Standard_D4as_v4|4|16|32|8|8000 / 64 (100)|6400 / 96|2 | 1600 |
-| Standard_D8as_v4|8|32|64|16|16000 / 128 (200)|12800 / 192|4 | 3200 |
-| Standard_D16as_v4|16|64|128|32|32000 / 255 (400)|25600 / 384|8 | 6400 |
-| Standard_D32as_v4|32|128|256|32|64000 / 510 (800)|51200 / 768|8 | 12800 |
-| Standard_D48as_v4|48|192|384|32|96000 / 1020 (1200)|76800 / 1148|8 | 19200 |
-| Standard_D64as_v4|64|256|512|32|128000 / 1020 (1600)|80000 / 1200|8 | 25600 |
-| Standard_D96as_v4|96|384|768|32|192000 / 1020 (2400)|80000 / 1200|8 | 32000 |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
+|--|--|--|--|--|--|--|--|--|--|
+| Standard_D2as_v4|2|8|16|4|4000 / 32 (50)|3200 / 48| 4000/200 | 2 | 800 |
+| Standard_D4as_v4|4|16|32|8|8000 / 64 (100)|6400 / 96| 8000/200 |2 | 1600 |
+| Standard_D8as_v4|8|32|64|16|16000 / 128 (200)|12800 / 192| 16000/400 |4 | 3200 |
+| Standard_D16as_v4|16|64|128|32|32000 / 255 (400)|25600 / 384| 32000/800 |8 | 6400 |
+| Standard_D32as_v4|32|128|256|32|64000 / 510 (800)|51200 / 768| 64000/1600 |8 | 12800 |
+| Standard_D48as_v4|48|192|384|32|96000 / 1020 (1200)|76800 / 1148| 80000/2000 |8 | 19200 |
+| Standard_D64as_v4|64|256|512|32|128000 / 1020 (1600)|80000 / 1200| 80000/2000 |8 | 25600 |
+| Standard_D96as_v4|96|384|768|32|192000 / 1020 (2400)|80000 / 1200| 80000/2000 |8 | 32000 |
+
+<sup>1</sup> Dasv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ddv4-ddsv4-series.md
The new Ddsv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB)
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | <sup>**</sup> Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected Network bandwidth (Mbps) |
-||||||||||
-| Standard_D2ds_v4 | 2 | 8 | 75 | 4 | 19000/120(50) | 3200/48 | 2|1000 |
-| Standard_D4ds_v4 | 4 | 16 | 150 | 8 | 38500/242(100) | 6400/96 | 2|2000 |
-| Standard_D8ds_v4 | 8 | 32 | 300 | 16 | 77000/485(200) | 12800/192 | 4|4000 |
-| Standard_D16ds_v4 | 16 | 64 | 600 | 32 | 154000/968(400) | 25600/384 | 8|8000 |
-| Standard_D32ds_v4 | 32 | 128 | 1200 | 32 | 308000/1936(800) | 51200/768 | 8|16000 |
-| Standard_D48ds_v4 | 48 | 192 | 1800 | 32 | 462000/2904(1200) | 76800/1152 | 8|24000 |
-| Standard_D64ds_v4 | 64 | 256 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 8|30000 |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | <sup>**</sup> Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Expected Network bandwidth (Mbps) |
+|||||||||||
+| Standard_D2ds_v4 | 2 | 8 | 75 | 4 | 19000/120(50) | 3200/48 | 4000/200 | 2|1000 |
+| Standard_D4ds_v4 | 4 | 16 | 150 | 8 | 38500/242(100) | 6400/96 | 8000/200 | 2|2000 |
+| Standard_D8ds_v4 | 8 | 32 | 300 | 16 | 77000/485(200) | 12800/192 | 16000/400 | 4|4000 |
+| Standard_D16ds_v4 | 16 | 64 | 600 | 32 | 154000/968(400) | 25600/384 | 32000/800 | 8|8000 |
+| Standard_D32ds_v4 | 32 | 128 | 1200 | 32 | 308000/1936(800) | 51200/768 | 64000/1600 | 8|16000 |
+| Standard_D48ds_v4 | 48 | 192 | 1800 | 32 | 462000/2904(1200) | 76800/1152 | 80000/2000 | 8|24000 |
+| Standard_D64ds_v4 | 64 | 256 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 80000/2000 | 8|30000 |
<sup>**</sup> These IOPS values can be achieved by using [Gen2 VMs](generation-2.md).
+<sup>1</sup> Ddsv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+ [!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)] ## Other sizes and information
virtual-machines Disk Bursting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disk-bursting.md
Title: Managed disk bursting
description: Learn about disk bursting for Azure disks and Azure virtual machines. Previously updated : 06/28/2021 Last updated : 06/29/2021 -+
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disk-encryption.md
Title: Server-side encryption of Azure managed disks description: Azure Storage protects your data by encrypting it at rest before persisting it to Storage clusters. You can use customer-managed keys to manage encryption with your own keys, or you can rely on Microsoft-managed keys for the encryption of your managed disks. Previously updated : 04/15/2021 Last updated : 06/29/2021 -+
virtual-machines Disks Benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-benchmarks.md
Title: Benchmark your application on Azure Disk Storage
description: Learn about the process of benchmarking your application on Azure. Previously updated : 01/29/2021 Last updated : 06/29/2021 -+ # Benchmark a disk
virtual-machines Disks Change Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-change-performance.md
Title: Performance tiers for Azure managed disks description: Learn about performance tiers for managed disks. -+ Previously updated : 03/02/2021 Last updated : 06/29/2021
virtual-machines Disks Enable Bursting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-bursting.md
Title: Enable on-demand disk bursting
description: Enable on-demand disk bursting on your managed disk. Previously updated : 06/28/2021 Last updated : 06/29/2021 -+
virtual-machines Disks Enable Customer Managed Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-customer-managed-keys-portal.md
Title: Azure portal - Enable customer-managed keys with SSE - managed disks
description: Enable customer-managed keys on your managed disks through the Azure portal. Previously updated : 05/17/2021 Last updated : 06/29/2021 -+
virtual-machines Disks Enable Double Encryption At Rest Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-double-encryption-at-rest-portal.md
Title: Enable double encryption at rest - Azure portal - managed disks
description: Enable double encryption at rest for your managed disk data using the Azure portal. Previously updated : 08/24/2020 Last updated : 06/29/2021 -+
virtual-machines Disks Enable Host Based Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-host-based-encryption-portal.md
Title: Enable end-to-end encryption using encryption at host - Azure portal - managed disks description: Use encryption at host to enable end-to-end encryption on your Azure managed disks - Azure portal. -+ Last updated 07/01/2021
virtual-machines Disks Enable Private Links For Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-private-links-for-import-export-portal.md
Title: Azure portal - Restrict import/export access to managed disks with Private Links description: Enable Private Links for your managed disks with Azure portal. Allowing you to securely export and import disks within your virtual network. -+ Previously updated : 08/24/2020 Last updated : 06/29/2021
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-ultra-ssd.md
Title: Ultra disks for VMs - Azure managed disks description: Learn about ultra disks for Azure VMs -+ Previously updated : 05/12/2021 Last updated : 06/29/2021
virtual-machines Disks Find Unattached Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-find-unattached-portal.md
Title: Identify unattached Azure disks - Azure portal description: How to find unattached Azure managed and unmanaged (VHDs/page blobs) disks by using the Azure portal. -+ Previously updated : 01/26/2021 Last updated : 06/29/2021
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-incremental-snapshots.md
Title: Create an incremental snapshot description: Learn about incremental snapshots for managed disks, including how to create them using the Azure portal, Azure PowerShell module, and Azure Resource Manager. -+ Previously updated : 01/15/2021 Last updated : 06/29/2021
virtual-machines Disks Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-metrics.md
Title: Disk metrics description: Examples of disk bursting metrics -+ Previously updated : 02/12/2021 Last updated : 06/29/2021
virtual-machines Disks Performance Tiers Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-performance-tiers-portal.md
Title: Change the performance of Azure managed disks using the Azure portal description: Learn how to change performance tiers for new and existing managed disks using the Azure portal. -+ Previously updated : 05/13/2021 Last updated : 06/29/2021
virtual-machines Disks Performance Tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-performance-tiers.md
Title: Change the performance of Azure managed disks - CLI/PowerShell description: Learn how to change performance tiers for existing managed disks using either the Azure PowerShell module or the Azure CLI. -+ Previously updated : 05/13/2021 Last updated : 06/29/2021
virtual-machines Disks Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-performance.md
Title: Virtual machine and disk performance description: Learn more about how virtual machines and their attached disks work in combination for performance.-- Previously updated : 10/12/2020++ Last updated : 06/29/2021 -+ # Virtual machine and disk performance
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-redundancy.md
Title: Redundancy options for Azure managed disks
description: Learn about zone-redundant storage and locally-redundant storage for Azure managed disks. Previously updated : 05/26/2021 Last updated : 06/29/2021 -+
virtual-machines Disks Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-reserved-capacity.md
Title: Optimize costs for Azure Disk Storage with reservations
description: Learn about purchasing Azure Disk Storage reservations to save costs on premium SSD managed disks. Previously updated : 01/30/2020 Last updated : 06/29/2021 -+
virtual-machines Disks Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-scalability-targets.md
Title: Scalability and performance targets for VM disks
description: Learn about scalability and performance targets for virtual machine disks attached to VMs. Previously updated : 06/02/2021 Last updated : 06/29/2021 -+
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared-enable.md
Title: Enable shared disks for Azure managed disks description: Configure an Azure managed disk with shared disks so that you can share it across multiple VMs -+ Previously updated : 06/16/2021 Last updated : 06/29/2021
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared.md
Title: Share an Azure managed disk across VMs description: Learn about sharing Azure managed disks across multiple Linux VMs. -+ Previously updated : 05/10/2021 Last updated : 06/29/2021
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 06/24/2021 Last updated : 06/29/2021 -+
virtual-machines Disks Use Storage Explorer Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-use-storage-explorer-managed-disks.md
Last updated 09/25/2019 -+
virtual-machines Dv4 Dsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv4-dsv4-series.md
Dsv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake)
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected Network bandwidth (Mbps) |
-|||||||||
-| Standard_D2s_v4 | 2 | 8 | Remote Storage Only | 4 | 3200/48 | 2|1000 |
-| Standard_D4s_v4 | 4 | 16 | Remote Storage Only | 8 | 6400/96 | 2|2000 |
-| Standard_D8s_v4 | 8 | 32 | Remote Storage Only | 16 | 12800/192 | 4|4000 |
-| Standard_D16s_v4 | 16 | 64 | Remote Storage Only | 32 | 25600/384 | 8|8000 |
-| Standard_D32s_v4 | 32 | 128 | Remote Storage Only | 32 | 51200/768 | 8|16000 |
-| Standard_D48s_v4 | 48 | 192 | Remote Storage Only | 32 | 76800/1152 | 8|24000 |
-| Standard_D64s_v4 | 64 | 256 | Remote Storage Only | 32 | 80000/1200 | 8|30000 |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Expected Network bandwidth (Mbps) |
+||||||||||
+| Standard_D2s_v4 | 2 | 8 | Remote Storage Only | 4 | 3200/48 | 4000/200 |2|1000 |
+| Standard_D4s_v4 | 4 | 16 | Remote Storage Only | 8 | 6400/96 | 8000/200 |2|2000 |
+| Standard_D8s_v4 | 8 | 32 | Remote Storage Only | 16 | 12800/192 | 16000/400 |4|4000 |
+| Standard_D16s_v4 | 16 | 64 | Remote Storage Only | 32 | 25600/384 | 32000/800 |8|8000 |
+| Standard_D32s_v4 | 32 | 128 | Remote Storage Only | 32 | 51200/768 | 64000/1600 |8|16000 |
+| Standard_D48s_v4 | 48 | 192 | Remote Storage Only | 32 | 76800/1152 | 80000/2000 |8|24000 |
+| Standard_D64s_v4 | 64 | 256 | Remote Storage Only | 32 | 80000/1200 | 80000/2000 |8|30000 |
+
+<sup>1</sup> Dsv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/eav4-easv4-series.md
Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
Easv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz and use premium SSD. The Easv4-series sizes are ideal for memory-intensive enterprise applications.
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs | Expected network bandwidth (Mbps) |
-|--|--|--|--|--|--|--|--|--|
-| Standard_E2as_v4|2|16|32|4|4000 / 32 (50)|3200 / 48|2 | 800 |
-| Standard_E4as_v4|4|32|64|8|8000 / 64 (100)|6400 / 96|2 | 1600 |
-| Standard_E8as_v4|8|64|128|16|16000 / 128 (200)|12800 / 192|4 | 3200 |
-| Standard_E16as_v4|16|128|256|32|32000 / 255 (400)|25600 / 384|8 | 6400 |
-| Standard_E20as_v4|20|160|320|32|40000 / 320 (500)|32000 / 480|8 | 8000 |
-| Standard_E32as_v4|32|256|512|32|64000 / 510 (800)|51200 / 768|8 | 12800 |
-| Standard_E48as_v4|48|384|768|32|96000 / 1020 (1200)|76800 / 1148|8 | 19200 |
-| Standard_E64as_v4|64|512|1024|32|128000 / 1020 (1600)|80000 / 1200|8 | 25600 |
-| Standard_E96as_v4 <sup>1</sup>|96|672|1344|32|192000 / 1020 (2400)|80000 / 1200|8 | 32000 |
-
-<sup>1</sup> [Constrained core sizes available](./constrained-vcpu.md).
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
+|--|--|--|--|--|--|--|--|--|--|
+| Standard_E2as_v4|2|16|32|4|4000 / 32 (50)|3200 / 48| 4000/200 |2 | 800 |
+| Standard_E4as_v4|4|32|64|8|8000 / 64 (100)|6400 / 96| 8000/200 |2 | 1600 |
+| Standard_E8as_v4|8|64|128|16|16000 / 128 (200)|12800 / 192| 16000/400 |4 | 3200 |
+| Standard_E16as_v4|16|128|256|32|32000 / 255 (400)|25600 / 384| 32000/800 |8 | 6400 |
+| Standard_E20as_v4|20|160|320|32|40000 / 320 (500)|32000 / 480| 40000/1000 |8 | 8000 |
+| Standard_E32as_v4|32|256|512|32|64000 / 510 (800)|51200 / 768| 64000/1600 |8 | 12800 |
+| Standard_E48as_v4|48|384|768|32|96000 / 1020 (1200)|76800 / 1148| 80000/2000 |8 | 19200 |
+| Standard_E64as_v4|64|512|1024|32|128000 / 1020 (1600)|80000 / 1200| 80000/2000 |8 | 25600 |
+| Standard_E96as_v4 <sup>2</sup>|96|672|1344|32|192000 / 1020 (2400)|80000 / 1200| 80000/2000 |8 | 32000 |
+
+<sup>1</sup> Easv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+<sup>2</sup> [Constrained core sizes available](./constrained-vcpu.md).
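
As a rough illustration of what the new burst column means across sizes, the following sketch (values copied from the Easv4 table above) compares baseline and burst uncached IOPS; smaller sizes gain the largest relative uplift because the burst ceiling tops out at 80000 IOPS:

```python
# Illustration only: relative uplift from baseline to burst uncached IOPS for a
# few Easv4 sizes, with values copied from the table above.
sizes = {
    "Standard_E2as_v4":  (3_200, 4_000),
    "Standard_E8as_v4":  (12_800, 16_000),
    "Standard_E48as_v4": (76_800, 80_000),
    "Standard_E96as_v4": (80_000, 80_000),
}

for name, (baseline, burst) in sizes.items():
    uplift_pct = (burst - baseline) / baseline * 100
    print(f"{name}: {baseline} -> {burst} IOPS (+{uplift_pct:.0f}% for up to 30 minutes)")
```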
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/edv4-edsv4-series.md
Edsv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | <sup>**</sup> Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected Network bandwidth (Mbps) |
-||||||||||
-| Standard_E2ds_v4 | 2 | 16 | 75 | 4 | 19000/120(50) | 3200/48 | 2|1000 |
-| Standard_E4ds_v4 <sup>1</sup> | 4 | 32 | 150 | 8 | 38500/242(100) | 6400/96 | 2|2000 |
-| Standard_E8ds_v4 <sup>1</sup> | 8 | 64 | 300 | 16 | 77000/485(200) | 12800/192 | 4|4000 |
-| Standard_E16ds_v4 <sup>1</sup> | 16 | 128 | 600 | 32 | 154000/968(400) | 25600/384 | 8|8000 |
-| Standard_E20ds_v4 | 20 | 160 | 750 | 32 | 193000/1211(500) | 32000/480 | 8|10000 |
-| Standard_E32ds_v4 <sup>1</sup> | 32 | 256 | 1200 | 32 | 308000/1936(800) | 51200/768 | 8|16000 |
-| Standard_E48ds_v4 | 48 | 384 | 1800 | 32 | 462000/2904(1200) | 76800/1152 | 8|24000 |
-| Standard_E64ds_v4 <sup>1</sup> | 64 | 504 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 8|30000 |
-| Standard_E80ids_v4 <sup>2</sup> | 80 | 504 | 2400 | 32 | 615000/3872(1600) | 80000/1500 | 8|30000 |
-
-<sup>1</sup> [Constrained core sizes available](./constrained-vcpu.md).
-
-<sup>2</sup> Instance is isolated to hardware dedicated to a single customer.
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | <sup>**</sup> Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Expected Network bandwidth (Mbps) |
+|||||||||||
+| Standard_E2ds_v4 | 2 | 16 | 75 | 4 | 19000/120(50) | 3200/48 | 4000/200 | 2|1000 |
+| Standard_E4ds_v4 | 4 | 32 | 150 | 8 | 38500/242(100) | 6400/96 | 8000/200 | 2|2000 |
+| Standard_E8ds_v4 | 8 | 64 | 300 | 16 | 77000/485(200) | 12800/192 | 16000/400 | 4|4000 |
+| Standard_E16ds_v4 | 16 | 128 | 600 | 32 | 154000/968(400) | 25600/384 | 32000/800 | 8|8000 |
+| Standard_E20ds_v4 | 20 | 160 | 750 | 32 | 193000/1211(500) | 32000/480 | 40000/1000 | 8|10000 |
+| Standard_E32ds_v4 | 32 | 256 | 1200 | 32 | 308000/1936(800) | 51200/768 | 64000/1600 | 8|16000 |
+| Standard_E48ds_v4 | 48 | 384 | 1800 | 32 | 462000/2904(1200) | 76800/1152 | 80000/2000 | 8|24000 |
+| Standard_E64ds_v4 <sup>2</sup> | 64 | 504 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 80000/2000 | 8|30000 |
+| Standard_E80ids_v4 <sup>3</sup> | 80 | 504 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 80000/2000 | 8|30000 |
+
+<sup>**</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)
+
+<sup>1</sup> Edsv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+<sup>2</sup> [Constrained core sizes available](./constrained-vcpu.md).
+
+<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.
-<sup>**</sup> These IOPs values can be achieved by using [Gen2 VMs](generation-2.md)
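
The table above lists separate cached and uncached limits. The following sketch is an illustration only, under the assumption that the cached limit governs I/O served through the host cache and local temp disk while the uncached limit governs I/O sent directly to remote data disks; it checks a hypothetical workload against the Standard_E8ds_v4 row:

```python
# Illustration only. Assumption: the cached limit applies to I/O served through the
# host cache and local temp disk, and the uncached limit to I/O sent directly to
# remote data disks. Limit values are taken from the Standard_E8ds_v4 row above.
cached_limit_iops = 77_000      # Max cached and temp storage throughput (200 GiB cache)
uncached_limit_iops = 12_800    # Max uncached disk throughput
burst_uncached_iops = 16_000    # Max burst uncached disk throughput (up to 30 minutes)

planned_uncached_iops = 15_000  # hypothetical workload, for illustration
if planned_uncached_iops <= uncached_limit_iops:
    print("Fits within the sustained uncached limit.")
elif planned_uncached_iops <= burst_uncached_iops:
    print("Exceeds the sustained limit but fits within the 30-minute burst limit.")
else:
    print("Exceeds even the burst limit; consider host caching or a larger size.")
```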
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ev4-esv4-series.md
Esv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake)
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected Network bandwidth (Mbps) |
-|||||||||
-| Standard_E2s_v4 | 2 | 16 | Remote Storage Only | 4 | 3200/48 | 2|1000 |
-| Standard_E4s_v4 <sup>1</sup> | 4 | 32 | Remote Storage Only | 8 | 6400/96 | 2|2000 |
-| Standard_E8s_v4 <sup>1</sup> | 8 | 64 | Remote Storage Only | 16 | 12800/192 | 4|4000 |
-| Standard_E16s_v4 <sup>1</sup> | 16 | 128 | Remote Storage Only | 32 | 25600/384 | 8|8000 |
-| Standard_E20s_v4 | 20 | 160 | Remote Storage Only | 32 | 32000/480 | 8|10000 |
-| Standard_E32s_v4 <sup>1</sup> | 32 | 256 | Remote Storage Only | 32 | 51200/768 | 8|16000 |
-| Standard_E48s_v4 | 48 | 384 | Remote Storage Only | 32 | 76800/1152 | 8|24000 |
-| Standard_E64s_v4 <sup>1</sup> | 64 | 504| Remote Storage Only | 32 | 80000/1200 | 8|30000 |
-| Standard_E80is_v4 <sup>2</sup> | 80 | 504 | Remote Storage Only | 32 | 80000/1500 | 8|30000 |
-
-<sup>1</sup> [Constrained core sizes available](./constrained-vcpu.md).
-
-<sup>2</sup> Instance is isolated to hardware dedicated to a single customer.
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> |Max NICs|Expected Network bandwidth (Mbps) |
+||||||||||
+| Standard_E2s_v4 | 2 | 16 | Remote Storage Only | 4 | 3200/48 | 4000/200 | 2|1000 |
+| Standard_E4s_v4 | 4 | 32 | Remote Storage Only | 8 | 6400/96 | 8000/200 | 2|2000 |
+| Standard_E8s_v4 | 8 | 64 | Remote Storage Only | 16 | 12800/192 | 16000/400 | 4|4000 |
+| Standard_E16s_v4 | 16 | 128 | Remote Storage Only | 32 | 25600/384 | 32000/800 | 8|8000 |
+| Standard_E20s_v4 | 20 | 160 | Remote Storage Only | 32 | 32000/480 | 40000/1000 | 8|10000 |
+| Standard_E32s_v4 | 32 | 256 | Remote Storage Only | 32 | 51200/768 | 64000/1600 | 8|16000 |
+| Standard_E48s_v4 | 48 | 384 | Remote Storage Only | 32 | 76800/1152 | 80000/2000 | 8|24000 |
+| Standard_E64s_v4 <sup>2</sup> | 64 | 504| Remote Storage Only | 32 | 80000/1200 | 80000/2000 | 8|30000 |
+| Standard_E80is_v4 <sup>3</sup> | 80 | 504 | Remote Storage Only | 32 | 80000/1200 | 80000/2000 | 8|30000 |
+
+<sup>1</sup> Esv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+<sup>2</sup> [Constrained core sizes available](./constrained-vcpu.md).
+
+<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/fsv2-series.md
Fsv2-series VMs feature Intel® Hyper-Threading Technology.
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> <br>
-| Size | vCPU's | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected network bandwidth (Mbps) |
-||||||||||
-| Standard_F2s_v2 | 2 | 4 | 16 | 4 | 4000/31 (32) | 3200/47 | 2|875 |
-| Standard_F4s_v2 | 4 | 8 | 32 | 8 | 8000/63 (64) | 6400/95 | 2|1750 |
-| Standard_F8s_v2 | 8 | 16 | 64 | 16 | 16000/127 (128) | 12800/190 | 4|3500 |
-| Standard_F16s_v2 | 16 | 32 | 128 | 32 | 32000/255 (256) | 25600/380 | 4|7000 |
-| Standard_F32s_v2 | 32 | 64 | 256 | 32 | 64000/512 (512) | 51200/750 | 8|14000 |
-| Standard_F48s_v2 | 48 | 96 | 384 | 32 | 96000/768 (768) | 76800/1100 | 8|21000 |
-| Standard_F64s_v2 | 64 | 128 | 512 | 32 | 128000/1024 (1024) | 80000/1100 | 8|28000 |
-| Standard_F72s_v2<sup>1, 2</sup> | 72 | 144 | 576 | 32 | 144000/1152 (1520) | 80000/1100 | 8|30000 |
-
-<sup>1</sup> The use of more than 64 vCPU require one of these supported guest operating systems:
+| Size | vCPUs | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> |Max NICs|Expected network bandwidth (Mbps) |
+|||||||||||
+| Standard_F2s_v2 | 2 | 4 | 16 | 4 | 4000/31 (32) | 3200/47 | 4000/200 | 2|875 |
+| Standard_F4s_v2 | 4 | 8 | 32 | 8 | 8000/63 (64) | 6400/95 | 8000/200 | 2|1750 |
+| Standard_F8s_v2 | 8 | 16 | 64 | 16 | 16000/127 (128) | 12800/190 | 16000/400 | 4|3500 |
+| Standard_F16s_v2 | 16 | 32 | 128 | 32 | 32000/255 (256) | 25600/380 | 32000/800 | 4|7000 |
+| Standard_F32s_v2 | 32 | 64 | 256 | 32 | 64000/512 (512) | 51200/750 | 64000/1600 | 8|14000 |
+| Standard_F48s_v2 | 48 | 96 | 384 | 32 | 96000/768 (768) | 76800/1100 | 80000/2000 | 8|21000 |
+| Standard_F64s_v2 | 64 | 128 | 512 | 32 | 128000/1024 (1024) | 80000/1100 | 80000/2000 | 8|28000 |
+| Standard_F72s_v2<sup>2, 3</sup> | 72 | 144 | 576 | 32 | 144000/1152 (1520) | 80000/1100 | 80000/2000 | 8|30000 |
+
+<sup>1</sup> Fsv2-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+<sup>2</sup> The use of more than 64 vCPUs requires one of these supported guest operating systems:
- Windows Server 2016 or later
- Ubuntu 16.04 LTS or later, with Azure tuned kernel (4.15 kernel or later)
- Debian 9 with the backports kernel, Debian 10 or later
- CoreOS with a 4.14 kernel or later
-<sup>2</sup> Instance is isolated to hardware dedicated to a single customer.
+<sup>3</sup> Instance is isolated to hardware dedicated to a single customer.
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Convert Unmanaged To Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/convert-unmanaged-to-managed-disks.md
Title: Convert a Linux VM from unmanaged disks to managed disks description: How to convert a Linux VM from unmanaged disks to managed disks by using Azure CLI. -+ Last updated 12/15/2017
virtual-machines Disks Enable Customer Managed Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-enable-customer-managed-keys-cli.md
Title: Azure CLI - Enable customer-managed keys with SSE - managed disks description: Enable customer-managed keys on your managed disks with the Azure CLI. Previously updated : 08/24/2020 Last updated : 06/29/2021 -+
virtual-machines Disks Enable Double Encryption At Rest Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-enable-double-encryption-at-rest-cli.md
Title: Enable double encryption at rest - Azure CLI - managed disks description: Enable double encryption at rest for your managed disk data using the Azure CLI. Previously updated : 08/24/2020 Last updated : 06/29/2021 -+
virtual-machines Disks Enable Host Based Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-enable-host-based-encryption-cli.md
Title: Enable end-to-end encryption using encryption at host - Azure CLI - managed disks description: Use encryption at host to enable end-to-end encryption on your Azure managed disks. -+ Last updated 07/01/2021
virtual-machines Disks Export Import Private Links Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-export-import-private-links-cli.md
Title: Azure CLI - Restrict import/export access to managed disks with Private Links description: Enable Private Links for your managed disks with Azure CLI. Allowing you to securely export and import disks within only your virtual network. -+ Previously updated : 08/11/2020 Last updated : 06/29/2021
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 06/15/2020 Last updated : 06/29/2021 -+
virtual-machines Find Unattached Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/find-unattached-disks.md
Title: Azure CLI - Find and delete unattached managed and unmanaged disks description: How to find and delete unattached Azure managed and unmanaged (VHDs/page blobs) disks by using Azure CLI. -+ Last updated 03/30/2018
virtual-machines Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/incremental-snapshots.md
Title: Use incremental snapshots for backup and recovery of unmanaged disks description: Create a custom solution for backup and recovery of your Azure virtual machine disks using incremental snapshots. -+
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/managed-disks-overview.md
Title: Azure Disk Storage overview description: Overview of Azure managed disks, which handle the storage accounts for you when using VMs. -+ Previously updated : 04/24/2020 Last updated : 06/29/2021
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nct4-v3-series.md
# NCasT4_v3-series
-The NCasT4_v3-series virtual machines are powered by [Nvidia Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/) GPUs and AMD EPYC 7V12(Rome) CPUs. The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores, and 440 GiB of system memory. These virtual machines are ideal for deploying AI services- such as real-time inferencing of user generated requests, or for interactive graphics and visualization workloads using NVIDIA's GRID driver and virtual GPU technology. Standard GPU compute workloads based around CUDA, TensorRT, Caffe, ONNX and other frameworks, or GPU-accelerated graphical applications based OpenGL and DirectX can be deployed economically, with close proximity to users, on the NCasT4_v3 series.
+The NCasT4_v3-series virtual machines are powered by [Nvidia Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/) GPUs and AMD EPYC 7V12 (Rome) CPUs. The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores (base frequency of 2.45 GHz, all-cores peak frequency of 3.1 GHz, and single-core peak frequency of 3.3 GHz), and 440 GiB of system memory. These virtual machines are ideal for deploying AI services, such as real-time inferencing of user-generated requests, or for interactive graphics and visualization workloads using NVIDIA's GRID driver and virtual GPU technology. Standard GPU compute workloads based around CUDA, TensorRT, Caffe, ONNX, and other frameworks, or GPU-accelerated graphical applications based on OpenGL and DirectX, can be deployed economically, with close proximity to users, on the NCasT4_v3 series.
<br> [ACU](acu.md): 230-260<br> [Premium Storage](premium-storage-performance.md): Supported<br> [Premium Storage caching](premium-storage-performance.md): Supported<br>
-[Ultra Disks](disks-types.md#ultra-disk): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/ultra-disk-storage-for-hpc-and-gpu-vms/ba-p/2189312) about availability, usage and performance) <br>
+[Ultra Disks](disks-types.md#ultra-disk): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/ultra-disk-storage-for-hpc-and-gpu-vms/ba-p/2189312) about availability, usage, and performance) <br>
[Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
virtual-machines Nvv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nvv4-series.md
# NVv4-series
-The NVv4-series virtual machines are powered by [AMD Radeon Instinct MI25](https://www.amd.com/en/products/professional-graphics/instinct-mi25) GPUs and AMD EPYC 7V12(Rome) CPUs. With NVv4-series Azure is introducing virtual machines with partial GPUs. Pick the right sized virtual machine for GPU accelerated graphics applications and virtual desktops starting at 1/8th of a GPU with 2 GiB frame buffer to a full GPU with 16 GiB frame buffer. NVv4 virtual machines currently support only Windows guest operating system.
+The NVv4-series virtual machines are powered by [AMD Radeon Instinct MI25](https://www.amd.com/en/products/professional-graphics/instinct-mi25) GPUs and AMD EPYC 7V12 (Rome) CPUs with a base frequency of 2.45 GHz, an all-cores peak frequency of 3.1 GHz, and a single-core peak frequency of 3.3 GHz. With the NVv4-series, Azure is introducing virtual machines with partial GPUs. Pick the right-sized virtual machine for GPU-accelerated graphics applications and virtual desktops, starting at 1/8th of a GPU with a 2 GiB frame buffer up to a full GPU with a 16 GiB frame buffer. NVv4 virtual machines currently support only the Windows guest operating system.
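
As a quick illustration of the partial-GPU sizing described above, the sketch below derives frame-buffer sizes from fractions of the MI25's 16 GiB frame buffer; the 1/4 and 1/2 partitions are assumed intermediate steps for illustration, not values stated in this article:

```python
# Illustration only: frame-buffer sizes for partial-GPU NVv4 VMs, derived from the
# 16 GiB Radeon Instinct MI25 frame buffer described above. The 1/4 and 1/2
# partition sizes are assumptions; the text only states the 1/8-GPU and full-GPU cases.
from fractions import Fraction

FULL_FRAME_BUFFER_GIB = 16
for share in (Fraction(1, 8), Fraction(1, 4), Fraction(1, 2), Fraction(1, 1)):
    print(f"{share} of a GPU -> {float(share) * FULL_FRAME_BUFFER_GIB:.0f} GiB frame buffer")
```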
<br>
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/premium-storage-performance.md
Title: 'Azure Premium Storage: Design for high performance' description: Design high-performance applications using Azure premium SSD managed disks. Premium Storage offers high-performance, low-latency disk support for I/O-intensive workloads running on Azure Virtual Machines. -+ Previously updated : 10/05/2020 Last updated : 06/29/2021
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-b-series-burstable.md
The B-series comes in the following VM sizes:
<br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Base CPU Perf of VM | Max CPU Perf of VM | Initial Credits | Credits banked/hour | Max Banked Credits | Max data disks | Max cached and temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max NICs |
-||||||||||||||
-| Standard_B1ls<sup>1</sup> | 1 | 0.5 | 4 | 5% | 100% | 30 | 3 | 72 | 2 | 200/10 | 160/10 | 2 |
-| Standard_B1s