Updates from: 03/17/2021 04:06:37
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
The following diagram depicts the components you'll configure in your Azure AD a
![Resource group projection](./media/azure-monitor/resource-group-projection.png)
-During this deployment, you'll configure both your Azure AD B2C tenant and Azure AD tenant where the Log Analytics workspace will be hosted. The account used to run the deployment must be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#limit-use-of-global-administrator) role in both of these tenants. It's also important to make sure you're signed in to the correct directory as you complete each step as described.
+During this deployment, you'll configure both your Azure AD B2C tenant and the Azure AD tenant where the Log Analytics workspace will be hosted. The Azure AD B2C account should be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#limit-use-of-global-administrator) role on the Azure AD B2C tenant. The Azure AD account used to run the deployment must be assigned the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Azure AD subscription. It's also important to make sure you're signed in to the correct directory as you complete each step as described.
## 1. Create or choose resource group
Next, you'll create an Azure Resource Manager template that grants Azure AD B2C
2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your **Azure AD** tenant.
3. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](../lighthouse/how-to/onboard-customer.md#create-an-azure-resource-manager-template).
- [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Lighthouse-samples%2Fmaster%2Ftemplates%2Frg-delegated-resource-management%2FrgDelegatedResourceManagement.json)
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)]( https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-ad-b2c%2Fsiem%2Fmaster%2Ftemplates%2FrgDelegatedResourceManagement.json)
5. On the **Custom deployment** page, enter the following information:
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Previously updated : 03/15/2021 Last updated : 03/16/2021 zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] + This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*. ![Custom domain user experience](./media/custom-domain/custom-domain-user-experience.png)
Copy the URL, change the domain name manually, and then paste it back to your br
Azure Front Door passes the user's original IP address. This is the IP address that you'll see in the audit reporting or your custom policy.
-### Can I use a third-party wab application firewall (WAF) with B2C?
+### Can I use a third-party web application firewall (WAF) with B2C?
Currently, Azure AD B2C supports a custom domain through the use of Azure Front Door only. Don't add another WAF in front of Azure Front Door.
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui-with-html.md
Previously updated : 01/28/2021 Last updated : 03/16/2021
https://contoso.blob.core.windows.net/fr/myHTML/unified.html
You can find sample templates for UI customization here:

```bash
-git clone https://github.com/Azure-Samples/Azure-AD-B2C-page-templates
+git clone https://github.com/azure-ad-b2c/html-templates
```

This project contains the following templates:
-- [Ocean Blue](https://github.com/Azure-Samples/Azure-AD-B2C-page-templates/tree/master/ocean_blue)
-- [Slate Gray](https://github.com/Azure-Samples/Azure-AD-B2C-page-templates/tree/master/slate_gray)
+- [Ocean Blue](https://github.com/azure-ad-b2c/html-templates/tree/main/templates/AzureBlue)
+- [Slate Gray](https://github.com/azure-ad-b2c/html-templates/tree/main/templates/MSA)
+- [Classic](https://github.com/azure-ad-b2c/html-templates/tree/main/templates/classic)
+- [Template resources](https://github.com/azure-ad-b2c/html-templates/tree/main/templates/src)
To use the sample:
-1. Clone the repo on your local machine. Choose a template folder `/ocean_blue` or `/slate_gray`.
-1. Upload all the files under the template folder and the `/assets` folder, to Blob storage as described in the previous sections.
-1. Next, open each `\*.html` file in the root of either `/ocean_blue` or `/slate_gray`, replace all instances of relative URLs with the URLs of the css, images, and fonts files you uploaded in step 2. For example:
+1. Clone the repo on your local machine. Choose a template folder `/AzureBlue`, `/MSA`, or `/classic`.
+1. Upload all the files under the template folder and the `/src` folder, to Blob storage as described in the previous sections.
+1. Next, open each `\*.html` file in the template folder. Then replace all instances of `https://login.microsoftonline.com` URLs with the URL of the files you uploaded in step 2. For example:
+
+ From:
```html
- <link href="./css/assets.css" rel="stylesheet" type="text/css" />
+ https://login.microsoftonline.com/templates/src/fonts/segoeui.WOFF
```
- To
+ To:
```html
- <link href="https://your-storage-account.blob.core.windows.net/your-container/css/assets.css" rel="stylesheet" type="text/css" />
+ https://your-storage-account.blob.core.windows.net/your-container/templates/src/fonts/segoeui.WOFF
```
+
1. Save the `\*.html` files and upload them to Blob storage.
1. Now modify the policy, pointing to your HTML file, as mentioned previously (a content definition sketch follows this list).
1. If you see missing fonts, images, or CSS, check your references in the extensions policy and the \*.html files.
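The following is a rough sketch of that policy change for custom policies: overriding a content definition's `LoadUri` in the extensions file so that it points at the uploaded HTML. The storage account, container, file name, and content definition ID below are placeholders, not values taken from this article:

```xml
<!-- Sketch only: adjust the ContentDefinition Id and the blob URL to your own environment. -->
<ContentDefinitions>
  <ContentDefinition Id="api.signuporsignin">
    <!-- Points the unified sign-up or sign-in page at the customized template in Blob storage. -->
    <LoadUri>https://your-storage-account.blob.core.windows.net/your-container/customized-ui.html</LoadUri>
  </ContentDefinition>
</ContentDefinitions>
```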
To use [company branding](customize-ui.md#configure-company-branding) assets in
## Next steps
-Learn how to enable [client-side JavaScript code](javascript-and-page-layout.md).
+Learn how to enable [client-side JavaScript code](javascript-and-page-layout.md).
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
Previously updated : 03/15/2021 Last updated : 03/16/2021
For a simpler sign-in experience, you can avoid redirecting users to a separate sign-in page or generating a pop-up window. By using the inline frame element `<iframe>`, you can embed the Azure AD B2C sign-in user interface directly into your web application. + ## Web application embedded sign-in The inline frame element `<iframe>` is used to embed a document in an HTML5 web page. You can use the iframe element to embed the Azure AD B2C sign-in user interface directly into your web application, as shown in the following example:
The inline frame element `<iframe>` is used to embed a document in an HTML5 web
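A minimal sketch of such an embedded sign-in experience is shown below. The custom domain, tenant, policy name, client ID, and redirect URI are all placeholders; the exact authorization request URL depends on your own app registration and user flow:

```html
<!-- Sketch only: replace the domain, tenant, policy (p), client_id, and redirect_uri with your own values. -->
<iframe
  id="b2c-signin-frame"
  title="Sign in"
  width="400"
  height="600"
  frameborder="0"
  src="https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_signupsignin&client_id=00000000-0000-0000-0000-000000000000&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fapp.contoso.com%2Fsignin&scope=openid&response_type=id_token">
</iframe>
```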
When using iframe, consider the following: - Embedded sign-in supports local accounts only. Most social identity providers (for example, Google and Facebook) block their sign-in pages from being rendered in inline frames.-- Because Azure AD B2C session cookies within an iframe are considered third-party cookies, certain browsers (for example Safari or Chrome in incognito mode) either block or clear these cookies, resulting in an undesirable user experience. To prevent this issue, make sure your application domain name and your Azure AD B2C domain have the *same origin*. For example, an application hosted on https://app.contoso.com has the same origin as Azure AD B2C running on https://login.contoso.com.
-
+- Because Azure AD B2C session cookies within an iframe are considered third-party cookies, certain browsers (for example Safari or Chrome in incognito mode) either block or clear these cookies, resulting in an undesirable user experience. To prevent this issue, make sure your application domain name and your Azure AD B2C domain have the *same origin*. To use the same origin, [enable custom domains](custom-domain.md) for your Azure AD B2C tenant, then configure your web app with the same origin. For example, an application hosted on https://app.contoso.com has the same origin as Azure AD B2C running on https://login.contoso.com.
+
+## Prerequisites
+
+* Complete the steps in the [Get started with custom policies in Active Directory B2C](custom-policy-get-started.md).
+* [Enable custom domains](custom-domain.md) for your policies.
+ ## Configure your policy

To allow your Azure AD B2C user interface to be embedded in an iframe, the content security policy `Content-Security-Policy` and frame options `X-Frame-Options` headers must be included in the Azure AD B2C HTTP response headers. These headers allow the Azure AD B2C user interface to run under your application domain name.
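For custom policies, these headers are typically emitted when journey framing is enabled in the relying party file. The following is a sketch only; the `Sources` value is an assumed application origin, and the rest of the relying party element is omitted:

```xml
<!-- Sketch only: use your own application origin(s) in Sources. -->
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <UserJourneyBehaviors>
    <!-- Allows the B2C pages to be rendered in an iframe hosted at the listed origins. -->
    <JourneyFraming Enabled="true" Sources="https://app.contoso.com" />
  </UserJourneyBehaviors>
  <!-- TechnicalProfile and output claims omitted -->
</RelyingParty>
```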
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-data-residency.md
Title: Azure AD Multifactor Authentication data residency
-description: Learn what personal and organizational data Azure AD Multifactor Authentication stores about you and your users and what data remains within the country/region of origin.
+ Title: Azure AD multifactor authentication data residency
+description: Learn what personal and organizational data Azure AD multifactor authentication stores about you and your users and what data remains within the country/region of origin.
-# Data residency and customer data for Azure AD Multifactor Authentication
+# Data residency and customer data for Azure AD multifactor authentication
-Customer data is stored by Azure AD in a geographical location based on the address provided by your organization when subscribing for a Microsoft Online service such as Microsoft 365 and Azure. For information on where your customer data is stored, you can use the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
+Azure Active Directory (Azure AD) stores customer data in a geographical location based on the address an organization provides when subscribing to a Microsoft online service such as Microsoft 365 or Azure. For information on where your customer data is stored, see [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) in the Microsoft Trust Center.
-Cloud-based Azure AD Multifactor Authentication and Azure AD Multifactor Authentication Server process and store some amount of personal data and organizational data. This article outlines what and where data is stored.
+Cloud-based Azure AD multifactor authentication and Azure Multifactor Authentication Server process and store personal data and organizational data. This article outlines what and where data is stored.
-The Azure AD Multifactor Authentication service has datacenters in the US, Europe, and Asia Pacific. The following activities originate out of the regional datacenters except where noted:
+The Azure AD multifactor authentication service has datacenters in the United States, Europe, and Asia Pacific. The following activities originate from the regional datacenters except where noted:
-* Multifactor authentication using phone calls originate from US datacenters and are routed by global providers.
-* General purpose user authentication requests from other regions such as Europe or Australia are currently processed based on the user's location.
-* Push notifications using the Microsoft Authenticator app are currently processed in the regional datacenters based on the user's location.
- * Device vendor-specific services, such as Apple Push Notifications, may be outside the user's location.
+* Multifactor authentication phone calls originate from United States datacenters and are routed by global providers.
+* General purpose user authentication requests from other regions are currently processed based on the user's location.
+* Push notifications that use the Microsoft Authenticator app are currently processed in regional datacenters based on the user's location. Vendor-specific device services, such as Apple Push Notification Service, might be outside the user's location.
-## Personal data stored by Azure AD Multifactor Authentication
+## Personal data stored by Azure AD multifactor authentication
-Personal data is user-level information associated with a specific person. The following data stores contain personal information:
+Personal data is user-level information that's associated with a specific person. The following data stores contain personal information:
* Blocked users
* Bypassed users
* Microsoft Authenticator device token change requests
-* Multifactor Authentication activity reports
+* Multifactor authentication activity reports
* Microsoft Authenticator activations

This information is retained for 90 days.
-Azure AD Multifactor Authentication doesn't log personal data such as username, phone number, or IP address, but there is a *UserObjectId* that identifies Multifactor Authentication attempts to users. Log data is stored for 30 days.
+Azure AD multifactor authentication doesn't log personal data such as usernames, phone numbers, or IP addresses. However, *UserObjectId* identifies authentication attempts to users. Log data is stored for 30 days.
-### Azure AD Multifactor Authentication
+### Data stored by Azure AD multifactor authentication
-For Azure public clouds, excluding Azure B2C authentication, NPS Extension, and Windows Server 2016 or 2019 AD FS Adapter, the following personal data is stored:
+For Azure public clouds, excluding Azure AD B2C authentication, the NPS Extension, and the Windows Server 2016 or 2019 Active Directory Federation Services (AD FS) adapter, the following personal data is stored:
| Event type | Data store type |
|--|--|
-| OATH token | In Multifactor Authentication logs |
-| One-way SMS | In Multifactor Authentication logs |
-| Voice call | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported |
-| Microsoft Authenticator notification | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
+| OATH token | Multifactor authentication logs |
+| One-way SMS | Multifactor authentication logs |
+| Voice call | Multifactor authentication logs<br/>Multifactor authentication activity report data store<br/>Blocked users (if fraud was reported) |
+| Microsoft Authenticator notification | Multifactor authentication logs<br/>Multifactor authentication activity report data store<br/>Blocked users (if fraud was reported)<br/>Change requests when the Microsoft Authenticator device token changes |
-For Microsoft Azure Government, Microsoft Azure Germany, Microsoft Azure Operated by 21Vianet, Azure B2C authentication, NPS Extension, and Windows Server 2016 or 2019 AD FS Adapter, the following personal data is stored:
+For Microsoft Azure Government, Microsoft Azure Germany, Microsoft Azure operated by 21Vianet, Azure AD B2C authentication, the NPS extension, and the Windows Server 2016 or 2019 AD FS adapter, the following personal data is stored:
| Event type | Data store type |
|--|--|
-| OATH token | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
-| One-way SMS | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
-| Voice call | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported |
-| Microsoft Authenticator notification | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
+| OATH token | Multifactor authentication logs<br/>Multifactor authentication activity report data store |
+| One-way SMS | Multifactor authentication logs<br/>Multifactor authentication activity report data store |
+| Voice call | Multifactor authentication logs<br/>Multifactor authentication activity report data store<br/>Blocked users (if fraud was reported) |
+| Microsoft Authenticator notification | Multifactor authentication logs<br/>Multifactor authentication activity report data store<br/>Blocked users (if fraud was reported)<br/>Change requests when the Microsoft Authenticator device token changes |
-### Multifactor Authentication Server
+### Data stored by Azure Multifactor Authentication Server
-If you deploy and run Azure AD Multifactor Authentication Server, the following personal data is stored:
+If you use Azure Multifactor Authentication Server, the following personal data is stored.
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft will no longer offer Multifactor Authentication Server for new deployments. New customers who would like to require multifactor authentication from their users should use cloud-based Azure AD Multifactor Authentication. Existing customers who have activated Multifactor Authentication Server prior to July 1 will be able to download the latest version, future updates and generate activation credentials as usual.
+> As of July 1, 2019, Microsoft no longer offers Multifactor Authentication Server for new deployments. New customers who want to require multifactor authentication from their users should use cloud-based Azure AD multifactor authentication. Existing customers who activated Multifactor Authentication Server before July 1, 2019, can download the latest version and updates, and generate activation credentials as usual.
| Event type | Data store type |
|--|--|
-| OATH token | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
-| One-way SMS | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
-| Voice call | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported |
-| Microsoft Authenticator notification | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
+| OATH token | Multifactor authentication logs<br />Multifactor authentication activity report data store |
+| One-way SMS | Multifactor authentication logs<br />Multifactor authentication activity report data store |
+| Voice call | Multifactor authentication logs<br />Multifactor authentication activity report data store<br />Blocked users (if fraud was reported) |
+| Microsoft Authenticator notification | Multifactor authentication logs<br />Multifactor authentication activity report data store<br />Blocked users (if fraud was reported)<br />Change requests when Microsoft Authenticator device token changes |
-## Organizational data stored by Azure AD Multifactor Authentication
+## Organizational data stored by Azure AD multifactor authentication
-Organizational data is tenant-level information that could expose configuration or environment setup. Tenant settings from the following Azure portal Multifactor Authentication pages may store organizational data such as lockout thresholds or caller ID information for incoming phone authentication requests:
+Organizational data is tenant-level information that can expose configuration or environment setup. Tenant settings from the following Azure portal multifactor authentication pages might store organizational data such as lockout thresholds or caller ID information for incoming phone authentication requests:
* Account lockout
* Fraud alert
* Notifications
* Phone call settings
-And for Azure AD Multifactor Authentication Server, the following Azure portal pages may contain organizational data:
+For Azure Multifactor Authentication Server, the following Azure portal pages might contain organizational data:
* Server settings
* One-time bypass
And for Azure AD Multifactor Authentication Server, the following Azure portal p
The following table shows the location for service logs for public clouds.
-| Public cloud| Sign-in logs | Multifactor Authentication activity report | Multifactor Authentication service logs |
+| Public cloud| Sign-in logs | Multifactor authentication activity report | Multifactor authentication service logs |
|-|--|-||
-| US | US | US | US |
-| Europe | Europe | US | Europe <sup>2</sup> |
-| Australia | Australia | US<sup>1</sup> | Australia <sup>2</sup> |
+| United States | United States | United States | United States |
+| Europe | Europe | United States | Europe <sup>2</sup> |
+| Australia | Australia | United States<sup>1</sup> | Australia <sup>2</sup> |
-<sup>1</sup>OATH Code logs are stored in Australia
+<sup>1</sup>OATH Code logs are stored in Australia.
-<sup>2</sup>Voice calls multifactor authentication service logs are stored in the US
+<sup>2</sup>Multifactor authentication service logs for voice calls are stored in the United States.
The following table shows the location for service logs for sovereign clouds.

| Sovereign cloud | Sign-in logs | Multifactor authentication activity report (includes personal data) | Multifactor authentication service logs |
|--|--|-||
-| Microsoft Azure Germany | Germany | US | US |
-| Microsoft Azure Operated by 21Vianet | China | US | US |
-| Microsoft Government Cloud | US | US | US |
+| Microsoft Azure Germany | Germany | United States | United States |
+| Azure China 21Vianet | China | United States | United States |
+| Microsoft Government Cloud | United States | United States | United States |
-The Multifactor Authentication activity report data contain personal data such as user principal name (UPN) and complete phone number.
+The multifactor authentication activity reports contain personal data such as User Principal Name (UPN) and complete phone number.
-The Multifactor Authentication service logs are used to operate the service.
+The multifactor authentication service logs are used to operate the service.
## Next steps
-For more information about what user information is collected by cloud-based Azure AD Multifactor Authentication and Azure AD Multifactor Authentication Server, see [Azure AD Multifactor Authentication user data collection](howto-mfa-reporting-datacollection.md).
+For more information about what user information is collected by cloud-based Azure AD multifactor authentication and Azure Multifactor Authentication Server, see [Azure AD multifactor authentication user data collection](howto-mfa-reporting-datacollection.md).
active-directory Troubleshoot Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/troubleshoot-sspr.md
Azure Active Directory (Azure AD) self-service password reset (SSPR) lets users reset their passwords in the cloud.
-If you have problems with SSPR, the following troubleshooting steps and common errors may help. If you can't find the answer to your problem, [our support teams are always available](#contact-microsoft-support) to assist you further.
+If you have problems with SSPR, the following troubleshooting steps and common errors may help. You can also watch this short video about [how to resolve the six most common SSPR end-user error messages](https://www.youtube.com/watch?v=9RPrNVLzT8I&list=PL3ZTgFEc7LyuS8615yo39LtXR7j1GCerW&index=1).
+
+If you can't find the answer to your problem, [our support teams are always available](#contact-microsoft-support) to assist you further.
## SSPR configuration in the Azure portal
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-saml-claims-customization.md
Select the desired source for the `NameIdentifier` (or NameID) claim. You can se
||-|
| Email | Email address of the user |
| userprincipalName | User principal name (UPN) of the user |
-| onpremisessamaccount | SAM account name that has been synced from on-premises Azure AD |
+| onpremisessamaccountname | SAM account name that has been synced from on-premises Azure AD |
| objectid | Objectid of the user in Azure AD |
| employeeid | Employee ID of the user |
| Directory extensions | Directory extensions [synced from on-premises Active Directory using Azure AD Connect Sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md) |
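For illustration, the selected source determines the value that ends up in the `NameID` element of the issued SAML assertion. A sketch with a hypothetical user, assuming the user principal name is chosen as the source and the default (unspecified) NameID format:

```xml
<!-- Illustrative only: the value reflects the selected source (here, a hypothetical user's UPN). -->
<Subject>
  <NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">jdoe@contoso.com</NameID>
</Subject>
```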
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50001 | InvalidResource - The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access. | | AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. | | AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that is not in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
-| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. Check out the resolutions outlined at [../manage-apps/application-sign-in-problem-federated-sso-gallery.md#certificate-or-key-not-configured](../manage-apps/application-sign-in-problem-federated-sso-gallery.md#certificate-or-key-not-configured). If you still see issues, contact the app owner or an app admin. |
+| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. To learn more, see the troubleshooting article for error [AADSTS50003](/troubleshoot/azure/active-directory/error-code-aadsts50003-cert-or-key-not-configured). If you still see issues, contact the app owner or an app admin. |
| AADSTS50005 | DevicePolicyError - User tried to log in to a device from a platform that's currently not supported through Conditional Access policy. | | AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. | | AADSTS50007 | PartnerEncryptionCertificateMissing - The partner encryption certificate was not found for this app. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Microsoft to get this fixed. | | AADSTS50008 | InvalidSamlToken - SAML assertion is missing or misconfigured in the token. Contact your federation provider. | | AADSTS50010 | AudienceUriValidationFailed - Audience URI validation for the app failed since no token audiences were configured. |
-| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or does not match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you.|
+| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or does not match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).|
| AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate is not authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain is not valid</li><li>The signing certificate is not valid</li><li>Policy is not configured on the tenant</li><li>Thumbprint of the signing certificate is not authorized</li><li>Client assertion contains an invalid signature</li></ul> | | AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion is not a primary refresh token. | | AADSTS50014 | GuestUserInPendingState - The user's redemption is in a pending state. The guest user account is not fully created yet. |
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50089 | Flow token expired - Authentication Failed. Have the user try signing-in again with username -password. | | AADSTS50097 | DeviceAuthenticationRequired - Device authentication is required. | | AADSTS50099 | PKeyAuthInvalidJwtUnauthorized - The JWT signature is invalid. |
-| AADSTS50105 | EntitlementGrantsNotFound - The signed in user is not assigned to a role for the signed in app. Assign the user to the app. For more information:[../manage-apps/application-sign-in-problem-federated-sso-gallery.md#user-not-assigned-a-role](../manage-apps/application-sign-in-problem-federated-sso-gallery.md#user-not-assigned-a-role). |
+| AADSTS50105 | EntitlementGrantsNotFound - The signed in user is not assigned to a role for the signed in app. Assign the user to the app. To learn more, see the troubleshooting article for error [AADSTS50105](/troubleshoot/azure/active-directory/error-code-aadsts50105-user-not-assigned-role). |
| AADSTS50107 | InvalidRealmUri - The requested federation realm object does not exist. Contact the tenant admin. | | AADSTS50120 | ThresholdJwtInvalidJwtFormat - Issue with JWT header. Contact the tenant admin. | | AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. |
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS54000 | MinorUserBlockedLegalAgeGroupRule | | AADSTS65001 | DelegationDoesNotExist - The user or administrator has not consented to use the application with ID X. Send an interactive authorization request for this user and resource. | | AADSTS65004 | UserDeclinedConsent - User declined to consent to access the app. Have the user retry the sign-in and consent to the app|
-| AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). Try out the resolution listed for SAML using the link below: [https://docs.microsoft.com/azure/active-directory/application-sign-in-problem-federated-sso-gallery#no-resource-in-requiredresourceaccess-list](../manage-apps/application-sign-in-problem-federated-sso-gallery.md?/?WT.mc_id=DMC_AAD_Manage_Apps_Troubleshooting_Nav) |
+| AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). |
| AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. | | AADSTS67003 | ActorNotValidServiceIdentity | | AADSTS70000 | InvalidGrant - Authentication failed. The refresh token is not valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
-| AADSTS70001 | UnauthorizedClient - The application is disabled. |
+| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#request-an-access-token). | | AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. | | AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS70019 | CodeExpired - Verification code expired. Have the user retry the sign-in. | | AADSTS75001 | BindingSerializationError - An error occurred during SAML message binding. | | AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response cannot be sent via bindings other than HTTP POST). |
-| AADSTS75005 | Saml2MessageInvalid - Azure AD doesnΓÇÖt support the SAML request sent by the app for SSO. |
+| AADSTS75005 | Saml2MessageInvalid - Azure AD doesn't support the SAML request sent by the app for SSO. To learn more, see the troubleshooting article for error [AADSTS75005](/troubleshoot/azure/active-directory/error-code-aadsts75005-not-a-valid-saml-request). |
| AADSTS7500514 | A supported type of SAML response was not found. The supported response types are 'Response' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:protocol') or 'Assertion' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:assertion'). Application error - the developer will handle this error.|
+| AADSTS750054 | SAMLRequest or SAMLResponse must be present as query string parameters in HTTP request for SAML Redirect binding. To learn more, see the troubleshooting article for error [AADSTS750054](/troubleshoot/azure/active-directory/error-code-aadsts750054-saml-request-not-present). |
| AADSTS75008 | RequestDeniedError - The request from the app was denied since the SAML request had an unexpected destination. |
-| AADSTS75011 | NoMatchedAuthnContextInOutputClaims - The authentication method by which the user authenticated with the service doesn't match requested authentication method. |
+| AADSTS75011 | NoMatchedAuthnContextInOutputClaims - The authentication method by which the user authenticated with the service doesn't match requested authentication method. To learn more, see the troubleshooting article for error [AADSTS75011](/troubleshoot/azure/active-directory/error-code-aadsts75011-auth-method-mismatch). |
| AADSTS75016 | Saml2AuthenticationRequestInvalidNameIDPolicy - SAML2 Authentication Request has invalid NameIdPolicy. | | AADSTS80001 | OnPremiseStoreIsNotAvailable - The Authentication Agent is unable to connect to Active Directory. Make sure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they are able to connect to Active Directory. | | AADSTS80002 | OnPremisePasswordValidatorRequestTimedout - Password validation request timed out. Make sure that Active Directory is available and responding to requests from the agents. |
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
If you develop with Python, try the following quickstart:
You add authentication to your web app so that it can sign in users. Adding authentication enables your web app to access limited profile information in order to customize the experience for users.
-Web apps authenticate a user in a web browser. In this scenario, the web app directs the user's browser to sign them in to Azure Active Directory (Azure AD). Azure AD returns a sign-in response through the user's browser, which contains claims about the user in a security token. Signing in users takes advantage of the [Open ID Connect](./v2-protocols-oidc.md) standard protocol, simplified by the use of middleware [libraries](scenario-web-app-sign-user-app-configuration.md#microsoft libraries supporting web apps).
+Web apps authenticate a user in a web browser. In this scenario, the web app directs the user's browser to sign them in to Azure Active Directory (Azure AD). Azure AD returns a sign-in response through the user's browser, which contains claims about the user in a security token. Signing in users takes advantage of the [Open ID Connect](./v2-protocols-oidc.md) standard protocol, simplified by the use of middleware [libraries](scenario-web-app-sign-user-app-configuration.md#microsoft-libraries-supporting-web-apps).
![Web app signs in users](./media/scenario-webapp/scenario-webapp-signs-in-users.svg)
Move on to the next article in this scenario,
Move on to the next article in this scenario, [App registration](./scenario-web-app-sign-user-app-registration.md?tabs=python).
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
This option is a premium edition capability available through products such as A
> [!NOTE] > **Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting does not apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-in-for-windows-vm-in-azure) and Azure AD joined devices using [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
+> [!IMPORTANT]
+> - We recommend using the ["Register or join devices" user action](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) in Conditional Access to enforce multi-factor authentication for joining or registering a device.
+> - You must set this setting to **No** if you are using a Conditional Access policy to require multi-factor authentication.
+ - **Maximum number of devices** - This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If a user reaches this quota, they are not able to add additional devices until one or more of the existing devices are removed. The default value is **50**. > [!NOTE]
active-directory How To Connect Health Ad Fs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-ad-fs-sign-in.md
+
+ Title: AD FS sign-ins in Azure AD with Connect Health | Microsoft Docs
+description: This document describes how to integrate AD FS sign-ins with the Azure AD Connect Health sign-ins report.
+
+documentationcenter: ''
+ na
+ms.devlang: na
+ Last updated : 03/16/2021
+# AD FS sign-ins in Azure AD with Connect Health - preview
+
+AD FS sign-ins can now be integrated into the Azure Active Directory sign-ins report by using Connect Health. The [Azure AD sign-ins report](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-all-sign-ins) includes information about when users, applications, and managed resources sign in to Azure AD and access resources.
+
+The Connect Health for AD FS agent correlates multiple Event IDs from AD FS, dependent on the server version, to provide information about the request and error details if the request fails. This information is correlated to the Azure AD sign-ins report schema and displayed in the Azure AD Sign-In Report UX. Alongside the report, a new Log Analytics stream is available with the AD FS data and a new Azure Monitor Workbook template. The template can be used and modified for an in-depth analysis for scenarios such as AD FS account lockouts, bad password attempts, and spikes of unexpected sign-in attempts.
+
+## Prerequisites
+* Azure AD Connect Health for AD FS installed and upgraded to the latest version.
+* Global Administrator or Reports Reader role to view the Azure AD sign-ins.
+
+## What data is displayed in the report?
+The data available mirrors the same data available for Azure AD sign-ins. Five tabs with information will be available based on the type of sign-in, either Azure AD or AD FS. Connect Health correlates events from AD FS, dependent on the server version, and matches them to the AD FS schema.
+#### User sign-ins
+Each tab in the sign-ins blade shows the default values below:
+* Sign-in date
+* Request ID
+* User name or user ID
+* Status of the sign-in
+* IP Address of the device used for the sign-in
+* Sign-In Identifier
+
+#### Authentication Method Information
+The following values may be displayed in the authentication tab. The authentication method is taken from the AD FS audit logs.
+
+|Authentication Method|Description|
+|--|--|
+|Forms|Username/password authentication|
+|Windows|Windows-integrated Authentication|
+|Certificate|Authentication with SmartCard / VirtualSmart certificates|
+|WindowsHelloForBusiness|This field is for auth with Windows Hello for Business. (Microsoft Passport Authentication)|
|Device | Displayed if Device Authentication is selected as "Primary" Authentication from intranet/extranet and Device Authentication is performed. There is no separate user authentication in this scenario.|
+|Federated|AD FS did not do the authentication but sent it to a third party identity provider|
+|SSO |If a single-sign-on token was used, this field will display. If the SSO has an MFA, it will show as Multifactor|
+|Multifactor|If a single sign-on token has an MFA and that was used for authentication, this field will display as Multifactor|
+|Azure MFA|Azure MFA is selected as the Additional Authentication Provider in AD FS and was used for authentication|
+|ADFSExternalAuthenticationProvider|This field is if a third-party authentication provider was registered and used for authentication|
+#### AD FS Additional Details
+The following details are available for AD FS sign-ins:
+* Server Name
+* IP Chain
+* Protocol
+
+### Enabling Log Analytics and Azure Monitor
+Log Analytics can be enabled for the AD FS sign-ins and can be used with any other Log Analytics integrated components, such as Sentinel.
+
+> [!NOTE]
+> AD FS sign-ins may increase Log Analytics cost significantly, depending on the number of sign-ins to AD FS in your organization. To enable and disable Log Analytics, select the checkbox for the stream.
+
+To enable Log Analytics for the feature, navigate to the Log Analytics blade and select the "ADFSSignIns" stream. This selection will allow AD FS sign-ins to flow into Log Analytics.
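After the stream is enabled, you can query it like any other Log Analytics table. The following Kusto query is a sketch only; the `ADFSSignInLogs` table name and column names are assumptions based on the Azure AD sign-in log schema, so verify them in your workspace before relying on the query:

```kusto
// Sketch only: table and column names are assumed; confirm them in your Log Analytics workspace.
ADFSSignInLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0"                      // keep failed sign-in attempts only
| summarize FailedAttempts = count() by UserPrincipalName, ResultType
| order by FailedAttempts desc
| take 20
```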
+
+To access the updated Azure Monitor Workbook template, navigate to "Azure Monitor Templates", and select the "sign-ins" Workbook.
+For more information about Workbooks, visit [Azure Monitor Workbooks](https://aka.ms/adfssigninspreview).
+### Frequently Asked Questions
+***What are the types of sign-ins that I may see?***
+The sign-in report supports sign-ins made through the OAuth, WS-Fed, SAML, and WS-Trust protocols.
+
+***How are different types of sign-ins shown in the sign-in report?***
+If a Seamless SSO sign-in is performed, there will be one row for the sign-in with one correlation ID.
+If single-factor authentication is performed, two rows will be populated with the same correlation ID, but with two different authentication methods (for example, Forms and SSO).
+In cases of multifactor authentication, there will be three rows with a shared correlation ID and three corresponding authentication methods (for example, Forms, AzureMFA, and Multifactor). In this example, the Multifactor entry shows that the SSO token includes MFA.
+
+***What are the errors that I can see in the report?***
+For a full list of AD FS related errors that are populated in the Sign-In report and descriptions, visit [AD FS Help Error Code Reference](https://adfshelp.microsoft.com/References/ConnectHealthErrorCodeReference)
+
+***I am seeing "00000000-0000-0000-0000-000000000000" in the "User" section of a sign-in. What does that mean?***
+If the sign-in failed and the attempted UPN does not match an existing UPN, the "User", "Username", and "User ID" fields will be "00000000-0000-0000-0000-000000000000", and the "Sign-in Identifier" will be populated with the attempted value the user entered. In these cases, the user attempting to sign in does not exist.
+
+***How can I correlate my on-premises events to the Azure AD sign-ins report?***
+The Azure AD Connect Health agent for AD FS correlates event IDs from AD FS dependent on server version. The events will be available on the Security Log of the AD FS servers.
+
+***Why do I see NotSet or NotApplicable in the Application ID/Name for some AD FS sign-ins?***
+The AD FS sign-in report displays OAuth IDs in the Application ID field for OAuth sign-ins. In WS-Fed and WS-Trust sign-in scenarios, the application ID will be NotSet or NotApplicable, and the resource IDs and relying party identifiers will be present in the Resource ID field.
+
+***Are there any more known issues with the report in preview?***
+The report has a known issue where the "Authentication Requirement" field in the "Basic Info" tab will be populated as a single factor authentication value for AD FS sign-ins regardless of the sign-in. Additionally, the Authentication Details tab will display "Primary or Secondary" under the Requirement field, with a fix in progress to differentiate Primary or Secondary authentication types.
+## Related links
+* [Azure AD Connect Health](./whatis-azure-ad-connect.md)
+* [Azure AD Connect Health Agent Installation](how-to-connect-health-agent-install.md)
+* [Risky IP report](how-to-connect-health-adfs-risky-ip.md)
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
+
+ Title: 'Selective Password Hash Synchronization for Azure AD Connect'
+description: This article describes how to set up and configure selective password hash synchronization for use with Azure AD Connect.
+ Last updated : 03/16/2021
+# Selective password hash synchronization configuration for Azure AD Connect
+
+[Password hash synchronization](whatis-phs.md) is one of the sign-in methods used to accomplish hybrid identity. Azure AD Connect synchronizes a hash, of the hash, of a user's password from an on-premises Active Directory instance to a cloud-based Azure AD instance. By default, once it has been set up, password hash synchronization will occur on all of the users you are synchronizing.
+
+If you'd like to have a subset of users excluded from synchronizing their password hash to Azure AD, you can configure selective password hash synchronization using the guided steps provided in this article.
+
+>[!Important]
+> Microsoft doesn't support modifying or operating Azure AD Connect sync outside of the configurations or actions that are formally documented. Any of these configurations or actions might result in an inconsistent or unsupported state of Azure AD Connect sync. As a result, Microsoft cannot guarantee that we will be able to provide efficient technical support for such deployments.
+## Consider your implementation
+To reduce the administrative effort of configuration, first consider the number of user objects you wish to exclude from password hash synchronization. The scenarios below are mutually exclusive; verify which one aligns with your requirements to select the right configuration option for you.
+- If the number of users to **exclude** is **smaller** than the number of users to **include**, follow the steps in this [section](#excluded-users-is-smaller-than-included-users).
+- If the number of users to **exclude** is **greater** than the number of users to **include**, follow the steps in this [section](#excluded-users-is-larger-than-included-users).
+
+> [!Important]
+> With either configuration option chosen, a required initial sync (full sync) will be performed automatically over the next sync cycle to apply the changes.
+
+### The adminDescription attribute
+Both scenarios rely on setting the adminDescription attribute of users to a specific value. This allows the rules to be applied and is what makes selective PHS work.
+
+|Scenario|adminDescription value|
+|--|--|
+|Excluded users is smaller than included users|PHSFiltered|
+|Excluded users is larger than included users|PHSIncluded|
+
+This attribute can be set either:
+
+- using the Active Directory Users and Computers UI
+- using `Set-ADUser` PowerShell cmdlet. For more information see [Set-ADUser](https://docs.microsoft.com/powershell/module/addsadministration/set-aduser).
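For example, the following PowerShell sketch stamps the value on a single test account. The sAMAccountName is a placeholder, and the value shown is for the first scenario (use **PHSIncluded** for the second):

```powershell
# Sketch only: 'jdoe' is a placeholder sAMAccountName; requires the ActiveDirectory RSAT module.
Import-Module ActiveDirectory

# Tag the user so the custom scoping filters apply.
Set-ADUser -Identity "jdoe" -Replace @{adminDescription = "PHSFiltered"}

# Verify the value was written.
Get-ADUser -Identity "jdoe" -Properties adminDescription | Select-Object Name, adminDescription
```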
+
+
+### Disable the synchronization scheduler:
+Before you start either scenario, you must disable the synchronization scheduler while making changes to the sync rules.
+ 1. Start Windows PowerShell and enter:
+
+ ```Set-ADSyncScheduler -SyncCycleEnabled $false```
+
+2. Confirm the scheduler is disabled by running the following cmdlet:
+
+ ```Get-ADSyncScheduler```
+
+For more information on the scheduler see [Azure AD Connect sync scheduler](how-to-connect-sync-feature-scheduler.md).
+## Excluded users is smaller than included users
+The following section describes how to enable selective password hash synchronization when the number of users to **exclude** is **smaller** than the number of users to **include**.
+
+>[!Important]
+> Before you proceed ensure the synchronization scheduler is disabled as outlined above.
+
+- Create an editable copy of the default **In from AD – User AccountEnabled** rule with the option to **enable password hash sync unselected**, and define its scoping filter
+- Create another editable copy of the default **In from AD – User AccountEnabled** rule with the option to **enable password hash sync selected**, and define its scoping filter
+- Re-enable the synchronization scheduler
+- Set the attribute value, in Active Directory, that was defined as the scoping attribute on the users you want to exclude from password hash synchronization.
+
+>[!Important]
+>The steps provided to configure selective password hash synchronization will only affect user objects that have
+the attribute **adminDescription** populated in Active Directory with the value of **PHSFiltered**.
+>If this attribute is not populated or the value is something other than **PHSFiltered** then these rules will not be applied to the user objects.
++
+### Configure the necessary synchronization rules:
+
+ 1. Start the Synchronization Rules Editor and set the filters **Password Sync** to **On** and **Rule Type** to **Standard**.
+ ![Start sync rules editor](media/how-to-connect-selective-password-hash-synchronization/exclude-1.png)
 2. Select the rule **In from AD – User AccountEnabled** for the Active Directory forest connector you want to configure selective password hash synchronization on, and click **Edit**. Select **Yes** in the next dialog box to create an editable copy of the original rule.
+ ![Select rule](media/how-to-connect-selective-password-hash-synchronization/exclude-2.png)
+ 3. The first rule will disable password hash sync.
+ Provide the following name to the new custom rule: **In from AD - User AccountEnabled - Filter Users from PHS**.
+ Change the precedence value to a number lower than 100 (for example **90** or whichever is the lowest value available in your environment).
 Make sure the checkboxes **Enable Password Sync** and **Disabled** are unchecked.
+ Click **Next**.
+ ![Edit inbound](media/how-to-connect-selective-password-hash-synchronization/exclude-3.png)
+ 4. In **Scoping filter**, click **Add clause**.
+ Select **adminDescription** in the attribute column, **EQUAL** in the Operator column and enter **PHSFiltered** as the value.
+ ![Scoping filter](media/how-to-connect-selective-password-hash-synchronization/exclude-4.png)
+ 5. No further changes are required. **Join rules** and **Transformations** should be left with the default copied settings so you can click **Save** now.
+ Click **OK** in the warning dialog box informing a full synchronization will be run on the next synchronization cycle of the connector.
+ ![Save rule](media/how-to-connect-selective-password-hash-synchronization/exclude-5.png)
 6. Next, create another custom rule with password hash synchronization enabled. Select again the default rule **In from AD – User AccountEnabled** for the Active Directory forest you want to configure selective password hash synchronization on, and click **Edit**. Select **Yes** in the next dialog box to create an editable copy of the original rule.
+ ![Custom rule](media/how-to-connect-selective-password-hash-synchronization/exclude-6.png)
+ 7. Provide the following name to the new custom rule: **In from AD - User AccountEnabled - Users included for PHS**.
 Change the precedence value to a number lower than the rule previously created (in this example, **89**).
+ Make sure the checkbox **Enable Password Sync** is checked and the **Disabled** checkbox is unchecked.
+ Click **Next**.
+ ![Edit new rule](media/how-to-connect-selective-password-hash-synchronization/exclude-7.png)
+ 8. In **Scoping filter**, click **Add clause**.
+ Select **adminDescription** in the attribute column, **NOTEQUAL** in the Operator column and enter **PHSFiltered** as the value.
+ ![Scope rule](media/how-to-connect-selective-password-hash-synchronization/exclude-8.png)
+ 9. No further changes are required. **Join rules** and **Transformations** should be left with the default copied settings so you can click **Save** now.
+ Click **OK** in the warning dialog box informing a full synchronization will be run on the next synchronization cycle of the connector.
+ ![Join rules](media/how-to-connect-selective-password-hash-synchronization/exclude-9.png)
 10. Confirm the creation of the rules. Remove the filters **Password Sync** **On** and **Rule Type** **Standard**, and you should see both new rules you just created.
+ ![Confirm rules](media/how-to-connect-selective-password-hash-synchronization/exclude-10.png)
+### Re-enable synchronization scheduler:
+Once you completed the steps to configure the necessary synchronization rules, re-enable the synchronization scheduler with the following steps:
+ 1. In Windows PowerShell run:
+
+ ```Set-ADSyncScheduler -SyncCycleEnabled $true```
+ 2. Then confirm it has been successfully enabled by running:
+
+ ```Get-ADSyncScheduler```
+
+For more information on the scheduler see [Azure AD Connect sync scheduler](how-to-connect-sync-feature-scheduler.md).
+
+### Edit users' **adminDescription** attribute:
+Once all configurations are complete, you need to edit the attribute **adminDescription** for all users you wish to **exclude** from password hash synchronization in Active Directory and add the string used in the scoping filter: **PHSFiltered**.
+
+ ![Edit attribute](media/how-to-connect-selective-password-hash-synchronization/exclude-11.png)
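+
+If you have many users to update, setting the attribute one at a time in the GUI is tedious. The following is a minimal PowerShell sketch using the ActiveDirectory RSAT module; the user identities are placeholders you would replace with your own:
+
+```powershell
+# Requires the ActiveDirectory module (RSAT). Replace the identities with the users to exclude.
+$usersToExclude = @('user1', 'user2')
+foreach ($user in $usersToExclude) {
+    # Stamp the scoping value used by the "Filter Users from PHS" rule.
+    Set-ADUser -Identity $user -Replace @{ adminDescription = 'PHSFiltered' }
+}
+```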
++
+## Excluded users is larger than included users
+The following section describes how to enable selective password hash synchronization when the number of users to **exclude** is **larger** than the number of users to **include**.
+
+>[!Important]
+> Before you proceed, ensure the synchronization scheduler is disabled as outlined above.
+
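+As a reminder, you can pause and verify the scheduler from PowerShell on the Azure AD Connect server (the same cmdlet is used later to re-enable it):
+
+```powershell
+# Temporarily stop scheduled synchronization cycles while you edit the rules.
+Set-ADSyncScheduler -SyncCycleEnabled $false
+# Confirm that SyncCycleEnabled now reports False.
+Get-ADSyncScheduler
+```
+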
+The following is a summary of the actions that will be taken in the steps below:
+
+- Create an editable copy of the default **In from AD - User AccountEnabled** rule with the option to **enable password hash sync** unselected, and define its scoping filter
+- Create another editable copy of the default **In from AD - User AccountEnabled** rule with the option to **enable password hash sync** selected, and define its scoping filter
+- Re-enable the synchronization scheduler
+- Set the attribute value in Active Directory that was defined as the scoping attribute on the users you want to include in password hash synchronization.
+
+>[!Important]
+>The steps provided to configure selective password hash synchronization will only affect user objects that have
+>the attribute **adminDescription** populated in Active Directory with the value of **PHSIncluded**.
+>If this attribute is not populated, or the value is something other than **PHSIncluded**, then these rules will not be applied to the user objects.
++
+### Configure the necessary synchronization rules:
+
+ 1. Start the Synchronization Rules Editor and set the filters **Password Sync** **On** and **Rule Type** **Standard**.
+ ![Rule type](media/how-to-connect-selective-password-hash-synchronization/include-1.png)
+ 2. Select the rule **In from AD - User AccountEnabled** for the Active Directory forest you want to configure selective password hash synchronization on and click **Edit**. Select **yes** in the next dialog box to create an editable copy of the original rule.
+ ![In from AD](media/how-to-connect-selective-password-hash-synchronization/include-2.png)
+ 3. The first rule will disable password hash sync.
+ Provide the following name to the new custom rule: **In from AD - User AccountEnabled - Filter Users from PHS**.
+ Change the precedence value to a number lower than 100 (for example **90** or whichever is the lowest value available in your environment).
+ Make sure the checkboxes **Enable Password Sync** and **Disabled** are unchecked.
+ Click **Next**.
+ ![Set precedence](media/how-to-connect-selective-password-hash-synchronization/include-3.png)
+ 4. In **Scoping filter**, click **Add clause**.
+Select **adminDescription** in the attribute column, **NOTEQUAL** in the Operator column and enter **PHSIncluded** as the value.
+ ![Add clause](media/how-to-connect-selective-password-hash-synchronization/include-4.png)
+ 5. No further changes are required. **Join rules** and **Transformations** should be left with the default copied settings, so you can click **Save** now.
+ Click **OK** in the warning dialog box informing you that a full synchronization will run on the next synchronization cycle of the connector.
+ ![Transformation](media/how-to-connect-selective-password-hash-synchronization/include-5.png)
+ 6. Next, create another custom rule with password hash synchronization enabled. Select the default rule **In from AD - User AccountEnabled** again for the Active Directory forest you want to configure selective password hash synchronization on, and click **Edit**. Select **yes** in the next dialog box to create an editable copy of the original rule.
+ ![User AccountEnabled](media/how-to-connect-selective-password-hash-synchronization/include-6.png)
+ 7. Provide the following name to the new custom rule: **In from AD - User AccountEnabled - Users included for PHS**.
+ Change the precedence value to a number lower than that of the rule you created previously (in this example, **89**).
+ Make sure the checkbox **Enable Password Sync** is checked and the **Disabled** checkbox is unchecked.
+ Click **Next**.
+ ![Enable Password Sync](media/how-to-connect-selective-password-hash-synchronization/include-7.png)
+ 8. In **Scoping filter**, click **Add clause**.
+ Select **adminDescription** in the attribute column, **EQUAL** in the Operator column and enter **PHSIncluded** as the value.
+ ![PHSIncluded](media/how-to-connect-selective-password-hash-synchronization/include-8.png)
+ 9. No further changes are required. **Join rules** and **Transformations** should be left with the default copied settings, so you can click **Save** now.
+ Click **OK** in the warning dialog box informing you that a full synchronization will run on the next synchronization cycle of the connector.
+ ![Save now](media/how-to-connect-selective-password-hash-synchronization/include-9.png)
+ 10. Confirm the creation of the rules. Remove the **Password Sync** **On** and **Rule Type** **Standard** filters, and you should see both of the new rules you just created.
+ ![Sync on](media/how-to-connect-selective-password-hash-synchronization/include-10.png)
+
+### Re-enable synchronization scheduler:
+Once you have completed the steps to configure the necessary synchronization rules, re-enable the synchronization scheduler with the following steps:
+ 1. In Windows PowerShell run:
+
+ ```Set-ADSyncScheduler -SyncCycleEnabled $true```
+ 2. Then confirm it has been successfully enabled by running:
+
+ ```Get-ADSyncScheduler```
+
+For more information on the scheduler see [Azure AD Connect sync scheduler](how-to-connect-sync-feature-scheduler.md).
+
+### Edit the users' **adminDescription** attribute:
+Once all configurations are complete, you need to edit the attribute **adminDescription** for all users you wish to **include** in password hash synchronization in Active Directory and add the string used in the scoping filter: **PHSIncluded**.
+
+ ![Edit attributes](media/how-to-connect-selective-password-hash-synchronization/include-11.png)
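+
+As in the exclusion scenario, you can set the attribute in bulk from PowerShell. A minimal sketch with the ActiveDirectory RSAT module, using a placeholder list of users:
+
+```powershell
+# Requires the ActiveDirectory module (RSAT). Replace the identities with the users to include.
+$usersToInclude = @('user1', 'user2')
+foreach ($user in $usersToInclude) {
+    # Stamp the scoping value used by the "Users included for PHS" rule.
+    Set-ADUser -Identity $user -Replace @{ adminDescription = 'PHSIncluded' }
+}
+```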
+
+
+
+## Next Steps
+- [What is password hash synchronization?](whatis-phs.md)
+- [How password hash sync works](how-to-connect-password-hash-synchronization.md)
active-directory Application Sign In Problem Federated Sso Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md
- Title: Problems signing in to SAML-based single sign-on configured apps
-description: Guidance for the specific errors when signing into an application you have configured for SAML-based federated single sign-on with Azure Active Directory
------- Previously updated : 02/18/2019-----
-# Problems signing in to SAML-based single sign-on configured apps
-To troubleshoot the sign-in issues below, we recommend the following to better diagnosis and automate the resolution steps:
--- Install the [My Apps Secure Browser Extension](my-apps-deployment-plan.md) to help Azure Active Directory (Azure AD) to provide better diagnosis and resolutions when using the testing experience in the Azure portal.-- Reproduce the error using the testing experience in the app configuration page in the Azure portal. Learn more on [Debug SAML-based single sign-on applications](./debug-saml-sso-issues.md)-
-If you use the [testing experience](./debug-saml-sso-issues.md) in the Azure portal with the My Apps Secure Browser Extension, you don't need to manually follow the steps below to open the SAML-based single sign-on configuration page.
-
-To open the SAML-based single sign-on configuration page:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Coadmin**.
-1. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-1. Type **"Azure Active Directory"** in the filter search box and select the **Azure Active Directory** item.
-1. Select **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
-1. Select **All Applications** to view a list of all your applications.
-
- If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications**.
-
-1. Select the application you want to configure for single sign-on.
-1. Once the application loads, select **Single sign-on** from the application's left-hand navigation menu.
-1. Select SAML-based SSO.
-
-## Application not found in directory
-
-`Error AADSTS70001: Application with Identifier 'https://contoso.com' was not found in the directory.`
-
-**Possible cause**
-
-The `Issuer` attribute sent from the application to Azure AD in the SAML request doesn't match the Identifier value that's configured for the application in Azure AD.
-
-**Resolution**
-
-Ensure that the `Issuer` attribute in the SAML request matches the Identifier value configured in Azure AD.
-
-On the SAML-based SSO configuration page, in the **Basic SAML configuration** section, verify that the value in the Identifier textbox matches the value for the identifier value displayed in the error.
-
-## The reply address does not match the reply addresses configured for the application
-`Error AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: '{application identifier}'.`
-
-**Possible cause**
-
-The `AssertionConsumerServiceURL` value in the SAML request doesn't match the Reply URL value or pattern configured in Azure AD. The `AssertionConsumerServiceURL` value in the SAML request is the URL you see in the error.
-
-**Resolution**
-
-Ensure that the `AssertionConsumerServiceURL` value in the SAML request matches the Reply URL value configured in Azure AD.
-
-Verify or update the value in the Reply URL textbox to match the `AssertionConsumerServiceURL` value in the SAML request.
-
-After you've updated the Reply URL value in Azure AD, and it matches the value sent by the application in the SAML request, you should be able to sign in to the application.
-
-## User not assigned a role
-`Error AADSTS50105: The signed in user 'brian@contoso.com' is not assigned to a role for the application.`
-
-**Possible cause**
-
-The user has not been granted access to the application in Azure AD.
-
-**Resolution**
-
-To assign one or more users to an application directly, see [Quickstart: Assign users to an app](add-application-portal-assign-users.md).
-
-## Not a valid SAML request
-`Error AADSTS75005: The request is not a valid Saml2 protocol message.`
-
-**Possible cause**
-
-Azure AD doesn't support the SAML request sent by the application for single sign-on. Some common issues are:
-- Missing required fields in the SAML request-- SAML request encoded method-
-**Resolution**
-
-1. Capture the SAML request. Follow the tutorial [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md) to learn how to capture the SAML request.
-1. Contact the application vendor and share the following info:
- - SAML request
- - [Azure AD Single Sign-on SAML protocol requirements](../develop/single-sign-on-saml-protocol.md)
-
-The application vendor should validate that they support the Azure AD SAML implementation for single sign-on.
-
-## Misconfigured application
-`Error AADSTS650056: Misconfigured application. This could be due to one of the following: The client has not listed any permissions in the requested permissions in the client's application registration. Or, The admin has not consented in the tenant. Or, Check the application identifier in the request to ensure it matches the configured client application identifier. Please contact your admin to fix the configuration or consent on behalf of the tenant.`
-
-**Possible cause**
-
-The `Issuer` attribute sent from the application to Azure AD in the SAML request doesn't match the Identifier value configured for the application in Azure AD.
-
-**Resolution**
-
-Ensure that the `Issuer` attribute in the SAML request matches the Identifier value configured in Azure AD.
-
-Verify that the value in the Identifier textbox matches the value for the identifier value displayed in the error.
-
-## Certificate or key not configured
-`Error AADSTS50003: No signing key configured.`
-
-**Possible cause**
-
-The application object is corrupted and Azure AD doesn't recognize the certificate configured for the application.
-
-**Resolution**
-
-To delete and create a new certificate, follow the steps below:
-1. On the SAML-based SSO configuration screen, select **Create new certificate** under the **SAML signing Certificate** section.
-1. Select Expiration date and then click **Save**.
-1. Check **Make new certificate active** to override the active certificate. Then, click **Save** at the top of the pane and accept to activate the rollover certificate.
-1. Under the **SAML Signing Certificate** section, click **remove** to remove the **Unused** certificate.
-
-## SAML Request not present in the request
-`Error AADSTS750054: SAMLRequest or SAMLResponse must be present as query string parameters in HTTP request for SAML Redirect binding.`
-
-**Possible cause**
-
-Azure AD wasn't able to identify the SAML request within the URL parameters in the HTTP request. This can happen if the application is not using HTTP redirect binding when sending the SAML request to Azure AD.
-
-**Resolution**
-
-The application needs to send the SAML request encoded into the location header using HTTP redirect binding. For more information about how to implement it, read the section HTTP Redirect Binding in the [SAML protocol specification document](https://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf).
-
-## Azure AD is sending the token to an incorrect endpoint
-**Possible cause**
-
-During single sign-on, if the sign-in request does not contain an explicit reply URL (Assertion Consumer Service URL), then Azure AD will select any of the configured reply URLs for that application. Even if the application has an explicit reply URL configured, the user may be redirected to https://127.0.0.1:444.
-
-When the application was added as a non-gallery app, Azure Active Directory created this reply URL as a default value. This behavior has changed and Azure Active Directory no longer adds this URL by default.
-
-**Resolution**
-
-Delete the unused reply URLs configured for the application.
-
-On the SAML-based SSO configuration page, in the **Reply URL (Assertion Consumer Service URL)** section, delete unused or default Reply URLs created by the system. For example, `https://127.0.0.1:444/applications/default.aspx`.
--
-## Authentication method by which the user authenticated with the service doesn't match requested authentication method
-`Error: AADSTS75011 Authentication method by which the user authenticated with the service doesn't match requested authentication method 'AuthnContextClassRef'. `
-
-**Possible cause**
-
-The `RequestedAuthnContext` is in the SAML request. This means the app is expecting the `AuthnContext` specified by the `AuthnContextClassRef`. However, the user has already authenticated prior to accessing the application, and the `AuthnContext` (authentication method) used for that previous authentication is different from the one being requested. For example, a federated user accesses My Apps and WIA occurred. The `AuthnContextClassRef` will be `urn:federation:authentication:windows`. Azure AD won't perform a fresh authentication request; it will use the authentication context that was passed through to it by the IdP (AD FS or any other federation service in this case). Therefore, there will be a mismatch if the app requests anything other than `urn:federation:authentication:windows`. Another scenario is when MultiFactor was used: `X509, MultiFactor`.
-
-**Resolution**
--
-`RequestedAuthnContext` is an optional value. If possible, ask the application vendor whether it can be removed.
-
-Another option is to make sure the `RequestedAuthnContext` will be honored by requesting a fresh authentication. When the SAML request is processed, a fresh authentication will be done and the `AuthnContext` will be honored. To request a fresh authentication, the SAML request must contain the value `forceAuthn="true"`.
---
-## Problem when customizing the SAML claims sent to an application
-To learn how to customize the SAML attribute claims sent to your application, see [Claims mapping in Azure Active Directory](../develop/active-directory-claims-mapping.md).
-
-## Errors related to misconfigured apps
-Verify both the configurations in the portal match what you have in your app. Specifically, compare Client/Application ID, Reply URLs, Client Secrets/Keys, and App ID URI.
-
-Compare the resource you're requesting access to in code with the configured permissions in the **Required Resources** tab to make sure you only request resources you've configured.
-
-## Next steps
-- [Quickstart Series on Application Management](add-application-portal-assign-users.md)-- [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md)-- [Azure AD Single Sign-on SAML protocol requirements](../develop/single-sign-on-saml-protocol.md)
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Your applications are likely using the following types of authentication:
- On-premises federation solutions (such as Active Directory Federation Services (ADFS) and Ping) -- Active Directory (such as Kerberos Auth and Windows Integrated Auth)
+- Active Directory (such as Kerberos Auth and Windows-Integrated Auth)
- Other cloud-based identity and access management (IAM) solutions (such as Okta or Oracle)
Your applications are likely using the following types of authentication:
Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD enables you to get the benefits these capabilities provide.
-See additional migration resources at [https://aka.ms/migrateapps](./migration-resources.md)
+You can find more migration resources at [https://aka.ms/migrateapps](./migration-resources.md)
## Benefits of migrating app authentication to Azure AD
Safeguarding your apps requires that you have a full view of all the risk factor
### Manage cost
-Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via M365 licenses, there is no reason to pay the added cost of another IAM solution.
+Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via Microsoft 365 licenses, there is no reason to pay the added cost of another IAM solution.
**With Azure AD, you can reduce infrastructure costs by:**
Economics and security benefits drive organizations to adopt Azure AD, but full
- Improve end-user [Single Sign-On (SSO)](./what-is-single-sign-on.md) experience through seamless and secure access to any application, from any device and any location. -- Leverage self-service IAM capabilities, such as [Self-Service Password Resets](../authentication/concept-sspr-howitworks.md) and [SelfService Group Management](../enterprise-users/groups-self-service-management.md).
+- Use self-service IAM capabilities, such as [Self-Service Password Resets](../authentication/concept-sspr-howitworks.md) and [Self-Service Group Management](../enterprise-users/groups-self-service-management.md).
- Reduce administrative overhead by managing only a single identity for each user across cloud and on-premises environments:
Economics and security benefits drive organizations to adopt Azure AD, but full
- Enable developers to secure access to their apps and improve the end-user experience by using the [Microsoft Identity Platform](../develop/v2-overview.md) with the Microsoft Authentication Library (MSAL). -- Empower your partners with access to cloud resources using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). This removes the overhead of configuring point-to-point federation with your partners.
+- Empower your partners with access to cloud resources using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Cloud resources remove the overhead of configuring point-to-point federation with your partners.
### Address compliance and governance
-Ensure compliance with regulatory requirements by enforcing corporate access policies and monitoring user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that leverage [Security Incident and Event Monitoring (SIEM) tools](../reports-monitoring/plan-monitoring-and-reporting.md). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access to inactive users via access reviews.
+Ensure compliance with regulatory requirements by enforcing corporate access policies and monitoring user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that use [Security Incident and Event Monitoring (SIEM) tools](../reports-monitoring/plan-monitoring-and-reporting.md). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access to inactive users via access reviews.
## Plan your migration phases and project strategy
In the following table you will find the minimum suggested communication to keep
| Communication | Audience | | | - |
-| Awareness and business / technical value of project | All except end-users |
+| Awareness and business / technical value of project | All except end users |
| Solicitation for pilot apps | - App business owners<br />- App technical owners<br />- Architects and Identity team | **Phase 1- Discover and Scope**:
Legacy apps that you choose to modernize
For legacy apps that you want to modernize, moving to Azure AD for core authentication and authorization unlocks all the power and data-richness that the [Microsoft Graph](https://developer.microsoft.com/graph/gallery/?filterBy=Samples,SDKs) and [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence?rtc=1) have to offer.
-We recommend **updating the authentication stack code** for these applications from the legacy protocol (such as Windows Integrated Authentication, Kerberos Constrained Delegation, HTTP Headers-based authentication) to a modern protocol (such as SAML or OpenID Connect).
+We recommend **updating the authentication stack code** for these applications from the legacy protocol (such as Windows-Integrated Authentication, Kerberos Constrained Delegation, HTTP Headers-based authentication) to a modern protocol (such as SAML or OpenID Connect).
### Legacy apps that you choose NOT to modernize
Apps without clear owners and clear maintenance and monitoring present a securit
- there is clearly **no usage**.
-Of course, **do not deprecate high impact, business-critical applications**. In those cases, work with business owners to determine the right strategy.
+We recommend that you **do not deprecate high impact, business-critical applications**. In those cases, work with business owners to determine the right strategy.
### Exit criteria
You are successful in this phase with:
- A list of apps that includes:
- - What systems those apps connect to o From where and on what devices users access them
-
+ - What systems those apps connect to
+ - From where and on what devices users access them
- Whether they will be migrated, deprecated, or connected with [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md). > [!NOTE]
Information that is important to making your migration decision includes:
- **App name** – what is this app known as to the business? -- **App type** – is it a 3rd party SaaS app? A custom line of business web app? An API?
+- **App type** – is it a third-party SaaS app? A custom line-of-business web app? An API?
- **Business criticality** – is it high criticality? Low? Or somewhere in between? - **User access volume** – does everyone access this app or just a few people? -- **Planned lifespan** – how long will this app be around? Less than 6 months? More than 2 years?
+- **Planned lifespan** – how long will this app be around? Less than six months? More than two years?
- **Current identity provider** – what is the primary IdP for this app? Or does it rely on local storage?
Information that is important to making your migration decision includes:
- **Whether you plan to update the app code** – is the app under planned or active development? -- **Whether you plan to keep the app on-premises** – do you want to keep the app in your datacenter long-term?
+- **Whether you plan to keep the app on-premises** – do you want to keep the app in your datacenter long term?
- **Whether the app depends on other apps or APIs** – does the app currently call into other apps or APIs?
Once you have classified your application and documented the details, then be su
The app(s) you select for the pilot should represent the key identity and security requirements of your organization, and you must have clear buy-in from the application owners. Pilots typically run in a separate test environment. See [best practices for pilots](../fundamentals/active-directory-deployment-plans.md#best-practices-for-a-pilot) on the deployment plans page.
-**Don't forget about your external partners.** Make sure that they participate in migration schedules and testing. Finally, ensure they have a way to access your helpdesk in case of breaking issues.
+**Don't forget about your external partners.** Make sure that they participate in migration schedules and testing. Finally, ensure they have a way to access your helpdesk if there are breaking issues.
### Plan for limitations
Business critical and universally used applications may need a group of pilot us
### Plan the security posture
-Before you initiate the migration process, take time to fully consider the security posture you wish to develop for your corporate identity system. This is based on gathering these valuable sets of information: **Identities and data, who is accessing your data, and devices and locations**.
+Before you initiate the migration process, take time to fully consider the security posture you wish to develop for your corporate identity system. This is based on gathering these valuable sets of information: **Identities, devices, and locations that are accessing your data.**
### Identities and data Most organizations have specific requirements about identities and data protection that vary by industry segment and by job functions within organizations. Refer to [identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations) for our recommendations including a prescribed set of [conditional access policies](../conditional-access/overview.md) and related capabilities.
-You can use this information to protect access to all services integrated with Azure AD. These recommendations are aligned with Microsoft Secure Score as well as the [identity score in Azure AD](../fundamentals/identity-secure-score.md). The score helps you to:
+You can use this information to protect access to all services integrated with Azure AD. These recommendations are aligned with Microsoft Secure Score and the [identity score in Azure AD](../fundamentals/identity-secure-score.md). The score helps you to:
- Objectively measure your identity security posture
There are two main categories of users of your apps and resources that Azure AD
You can define groups for these users and populate these groups in diverse ways. You may choose that an administrator must manually add members into a group, or you can enable self-service group membership. Rules can be established that automatically add members into groups based on the specified criteria using [dynamic groups](../enterprise-users/groups-dynamic-membership.md).
-External users may also refer to customers which requires special consideration. [Azure AD B2C](../../active-directory-b2c/overview.md), a separate product supports customer authentication. However, it is outside the scope of this paper.
+External users may also refer to customers. [Azure AD B2C](../../active-directory-b2c/overview.md), a separate product, supports customer authentication. However, it is outside the scope of this paper.
### Device/location used to access data
Use the tools and guidance below to follow the precise steps needed to migrate y
- **Applications running on-premises** ΓÇô Learn all [about the Azure AD Application Proxy](./application-proxy.md) and use the complete [Azure AD Application Proxy deployment plan](https://aka.ms/AppProxyDPDownload) to get going quickly. -- **Apps youΓÇÖre developing** ΓÇô Read our step by step [integration](../develop/quickstart-register-app.md) and [registration](../develop/quickstart-register-app.md) guidance.
+- **Apps youΓÇÖre developing** ΓÇô Read our step-by-step [integration](../develop/quickstart-register-app.md) and [registration](../develop/quickstart-register-app.md) guidance.
After migration, you may choose to send communication informing the users of the successful deployment and remind them of any new steps that they need to take.
After migration, you may choose to send communication informing the users of the
During the process of the migration, your app may already have a test environment used during regular deployments. You can continue to use this environment for migration testing. If a test environment is not currently available, you may be able to set one up using Azure App Service or Azure Virtual Machines, depending on the architecture of the application. You may choose to set up a separate test Azure AD tenant to use as you develop your app configurations. This tenant will start in a clean state and will not be configured to sync with any system.
-You can test each app by logging in with a test user and make sure all functionality is the same as prior to the migration. If you determine during testing that users will need to update their [MFA](/active-directory/authentication/howto-mfa-userstates) or [SSPR](../authentication/tutorial-enable-sspr.md)settings, or you are adding this functionality during the migration, be sure to add that to your end user communication plan. See [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
+You can test each app by logging in with a test user and making sure all functionality is the same as prior to the migration. If you determine during testing that users will need to update their [MFA](/active-directory/authentication/howto-mfa-userstates) or [SSPR](../authentication/tutorial-enable-sspr.md) settings, or you are adding this functionality during the migration, be sure to add that to your end-user communication plan. See [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
-Once you have migrated the apps, go to the [Azure Portal](https://aad.portal.azure.com/) to test if the migration was a success. Follow the instructions below:
+Once you have migrated the apps, go to the [Azure portal](https://aad.portal.azure.com/) to test if the migration was a success. Follow the instructions below:
- Select **Enterprise Applications &gt; All applications** and find your app from the list.
Depending on how you configure your app, verify that SSO works properly.
### Troubleshoot
-If you run into problems, check out our [apps troubleshooting guide](../app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md) to get help. See also [Problems signing in to a custom-developed application](./application-sign-in-problem-federated-sso-gallery.md).
+If you run into problems, check out our [apps troubleshooting guide](../app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md) to get help. You can also check out our troubleshooting articles, see [Problems signing in to SAML-based single sign-on configured apps](/troubleshoot/azure/active-directory/troubleshoot-sign-in-saml-based-apps).
### Plan rollback
-If your migration fails, the best strategy is to rollback and test. Here are the steps that you can take to mitigate migration issues:
+If your migration fails, the best strategy is to roll back and test. Here are the steps that you can take to mitigate migration issues:
- **Take screenshots** of the existing configuration of your app. You can look back if you must reconfigure the app once again. -- You might also consider **providing links to the legacy authentication**, in case of issues with cloud authentication.
+- You might also consider **providing links to the legacy authentication**, in case there are issues with cloud authentication.
- Before you complete your migration, **do not change your existing configuration** with the earlier identity provider. - Begin by migrating **the apps that support multiple IdPs**. If something goes wrong, you can always change to the preferred IdP's configuration. -- Ensure that your app experience has a **Feedback button** or pointers to your **helpdesk** in case of issues.
+- Ensure that your app experience has a **Feedback button** or pointers to your **helpdesk** in case of issues.
### Exit criteria
Azure AD provides a centralized access location to manage your migrated apps. Go
- **Secure user access to apps.** Enable [Conditional Access policies](../conditional-access/overview.md)or [Identity Protection](../identity-protection/overview-identity-protection.md)to secure user access to applications based on device state, location, and more. -- **Automatic provisioning.** Set up [automatic provisioning of users](../app-provisioning/user-provisioning.md) with a variety of third-party SaaS apps that users need to access. In addition to creating user identities, it includes the maintenance and removal of user identities as status or roles change.
+- **Automatic provisioning.** Set up [automatic provisioning of users](../app-provisioning/user-provisioning.md) with various third-party SaaS apps that users need to access. In addition to creating user identities, it includes the maintenance and removal of user identities as status or roles change.
- **Delegate user access** **management**. As appropriate, enable self-service application access to your apps and *assign a business approver to approve access to those apps*. Use [Self-Service Group Management](../enterprise-users/groups-self-service-management.md)for groups assigned to collections of apps.
Azure AD provides a centralized access location to manage your migrated apps. Go
You can also use the [Azure portal](https://portal.azure.com/) to audit all your apps from a centralized location, -- **Audit your app** using **Enterprise Applications, Audit** or access the same information from the [Azure AD Reporting API](../reports-monitoring/concept-reporting-api.md) to integrate into your favorite tools.
+- **Audit your app** using **Enterprise Applications, Audit**, or access the same information from the [Azure AD Reporting API](../reports-monitoring/concept-reporting-api.md) to integrate into your favorite tools.
- **View the permissions for an app** using **Enterprise Applications, Permissions** for apps using OAuth / OpenID Connect. - **Get sign-in insights** using **Enterprise Applications, Sign-Ins**. Access the same information from the [Azure AD Reporting API.](../reports-monitoring/concept-reporting-api.md) -- **Visualize your appΓÇÖs usage** from the [Azure AD PowerBI content pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md)
+- **Visualize your app's usage** from the [Azure AD Power BI content pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md)
### Exit criteria
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-sso-deployment.md
The following links present troubleshooting scenarios. You may want to create a
- [Problem signing into a Microsoft application](./application-sign-in-problem-first-party-microsoft.md)
-#### SSO issues for applications listed in the Azure Application Gallery
+#### SSO issues for applications
-- [Problem with password SSO for applications listed in the Azure Application Gallery](./troubleshoot-password-based-sso.md)
+- [Problem with password SSO for applications](./troubleshoot-password-based-sso.md)
-- [Problem with federated SSO for applications listed in the Azure Application Gallery](./application-sign-in-problem-federated-sso-gallery.md)
+- [Problems signing in to SAML-based single sign-on configured apps](/troubleshoot/azure/active-directory/troubleshoot-sign-in-saml-based-apps)
-#### SSO issues for applications NOT listed in the Azure Application Gallery
--- [Problem with password SSO for applications NOT listed in the Azure Application Gallery](./troubleshoot-password-based-sso.md) --- [Problem with federated SSO for applications NOT listed in the Azure Application Gallery](./application-sign-in-problem-federated-sso-gallery.md) ## Next steps
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Welcome to what's new in Azure Active Directory application management documenta
- [Grant tenant-wide admin consent to an application](grant-admin-consent.md) - [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md) - [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md)-- [Problems signing in to SAML-based single sign-on configured apps](application-sign-in-problem-federated-sso-gallery.md) - [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md) ## January 2021
Welcome to what's new in Azure Active Directory application management documenta
### Updated articles - [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Problems signing in to SAML-based single sign-on configured apps](application-sign-in-problem-federated-sso-gallery.md) ## October 2020
Welcome to what's new in Azure Active Directory application management documenta
### Updated articles -- [Problems signing in to SAML-based single sign-on configured apps](application-sign-in-problem-federated-sso-gallery.md) - [Problem installing the Application Proxy Agent Connector](application-proxy-connector-installation-problem.md) - [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md) - [Configure how end-users consent to applications](configure-user-consent.md)
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
na Previously updated : 09/23/2020 Last updated : 03/16/2021
Each tab in the sign-ins blade shows the default columns below. Some tabs have a
Interactive user sign-ins are sign-ins where a user provides an authentication factor to Azure AD or interacts directly with Azure AD or a helper app, such as the Microsoft Authenticator app. The factors users provide include passwords, responses to MFA challenges, biometric factors, or QR codes that a user provides to Azure AD or to a helper app.
-This report also includes federated sign-ins from identity providers that are federated to Azure AD.
+> [!NOTE]
+> This report also includes federated sign-ins from identity providers that are federated to Azure AD.
+++
+Note: The interactive user sign-ins report used to contain some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-ins report for additional visibility. Once the non-interactive user sign-ins report entered public preview in November 2020, those non-interactive sign-in event logs were moved to the non-interactive user sign-ins report for increased accuracy.
**Report size:** small <br>
active-directory Github Enterprise Managed User Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-managed-user-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise Managed User | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and GitHub Enterprise Managed User.
++++++++ Last updated : 03/15/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise Managed User
+
+In this tutorial, you'll learn how to integrate GitHub Enterprise Managed User with Azure Active Directory (Azure AD). When you integrate GitHub Enterprise Managed User with Azure AD, you can:
+
+* Control in Azure AD who has access to GitHub Enterprise Managed User.
+* Enable your users to be automatically signed-in to GitHub Enterprise Managed User with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* GitHub Enterprise Managed User single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* GitHub Enterprise Managed User supports **SP and IDP** initiated SSO.
+* GitHub Enterprise Managed User supports **Just In Time** user provisioning.
+* GitHub Enterprise Managed User supports [**Automated** user provisioning](https://docs.microsoft.com/azure/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial).
+
+## Adding GitHub Enterprise Managed User from the gallery
+
+To configure the integration of GitHub Enterprise Managed User into Azure AD, you need to add GitHub Enterprise Managed User from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **GitHub Enterprise Managed User** in the search box.
+1. Select **GitHub Enterprise Managed User** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for GitHub Enterprise Managed User
+
+Configure and test Azure AD SSO with GitHub Enterprise Managed User using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in GitHub Enterprise Managed User.
+
+To configure and test Azure AD SSO with GitHub Enterprise Managed User, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure GitHub Enterprise Managed User SSO](#configure-github-enterprise-managed-user-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create GitHub Enterprise Managed User test user](#create-github-enterprise-managed-user-test-user)** - to have a counterpart of B.Simon in GitHub Enterprise Managed User that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **GitHub Enterprise Managed User** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://github.com/enterprise-managed/<ENTITY>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://github.com/enterprises/<ENTITY>/saml/consume`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://github.com/enterprises/<ENTITY>/sso`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [GitHub Enterprise Managed User Client support team](mailto:support@github.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up GitHub Enterprise Managed User** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
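+If you prefer scripting, the same test user can be created with the AzureAD PowerShell module. This is a sketch only; it assumes the module is installed, you are connected with `Connect-AzureAD`, and `contoso.com` is a placeholder for your tenant domain:
+
+```powershell
+# Create the B.Simon test user with a temporary password.
+$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
+$passwordProfile.Password = '<temporary-strong-password>'
+New-AzureADUser -DisplayName 'B.Simon' `
+    -UserPrincipalName 'B.Simon@contoso.com' `
+    -MailNickName 'BSimon' `
+    -PasswordProfile $passwordProfile `
+    -AccountEnabled $true
+```
+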
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to GitHub Enterprise Managed User.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **GitHub Enterprise Managed User**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure GitHub Enterprise Managed User SSO
+
+To configure single sign-on on the **GitHub Enterprise Managed User** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [GitHub Enterprise Managed User support team](mailto:support@github.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create GitHub Enterprise Managed User test user
+
+In this section, a user called B.Simon is created in GitHub Enterprise Managed User. GitHub Enterprise Managed User supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in GitHub Enterprise Managed User, a new one is created when you attempt to access GitHub Enterprise Managed User.
+
+GitHub Enterprise Managed User also supports automatic user provisioning. You can find more details [here](https://docs.microsoft.com/azure/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial) on how to configure automatic user provisioning.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the GitHub Enterprise Managed User Sign-on URL where you can initiate the login flow.
+
+* Go to GitHub Enterprise Managed User Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the GitHub Enterprise Managed User for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the GitHub Enterprise Managed User tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the GitHub Enterprise Managed User for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure GitHub Enterprise Managed User you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
helm repo update
# Install the cert-manager Helm chart helm install cert-manager jetstack/cert-manager \ --namespace ingress-basic \
- --version v0.16.1 \
--set installCRDs=true \ --set nodeSelector."kubernetes\.io/os"=linux \ --set webhook.nodeSelector."kubernetes\.io/os"=linux \
Before certificates can be issued, cert-manager requires an [Issuer][cert-manage
Create a cluster issuer, such as `cluster-issuer.yaml`, using the following example manifest. Update the email address with a valid address from your organization: ```yaml
-apiVersion: cert-manager.io/v1alpha2
+apiVersion: cert-manager.io/v1
kind: ClusterIssuer metadata: name: letsencrypt
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/limit-egress-traffic.md
description: Learn what ports and addresses are required to control egress traff
Previously updated : 11/09/2020 Last updated : 01/12/2021 #Customer intent: As an cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
The following FQDN / application rules are required for AKS clusters that have t
| **`raw.githubusercontent.com`** | **`HTTPS:443`** | This address is used to pull the built-in policies from GitHub to ensure correct operation of Azure Policy. | | **`dc.services.visualstudio.com`** | **`HTTPS:443`** | Azure Policy add-on that sends telemetry data to applications insights endpoint. |
+#### Azure China 21Vianet Required FQDN / application rules
+
+The following FQDN / application rules are required for AKS clusters that have the Azure Policy enabled.
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`data.policy.azure.cn`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to policy service. |
+| **`store.policy.azure.cn`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
+
+#### Azure US Government Required FQDN / application rules
+
+The following FQDN / application rules are required for AKS clusters that have the Azure Policy enabled.
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`data.policy.azure.us`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to policy service. |
+| **`store.policy.azure.us`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
+ ## Restrict egress traffic using Azure firewall Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) FQDN Tag to simplify this configuration.
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
az extension add --name aks-preview
az extension update --name aks-preview ```
-## Create an AKS cluster with managed identities
+## Create an AKS cluster with Azure CNI
-Create an AKS cluster with a managed identity and pod-managed identity enabled. The following commands use [az group create][az-group-create] to create a resource group named *myResourceGroup* and the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster* in the *myResourceGroup* resource group.
+> [!NOTE]
+> This is the default recommended configuration.
+
+Create an AKS cluster with Azure CNI and pod-managed identity enabled. The following commands use [az group create][az-group-create] to create a resource group named *myResourceGroup* and the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster* in the *myResourceGroup* resource group.
```azurecli-interactive az group create --name myResourceGroup --location eastus
-az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure
+az aks create -g myResourceGroup -n myAKSCluster --enable-pod-identity --network-plugin azure
``` Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your development computer.
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS clus
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```+
+## Update an existing AKS cluster with Azure CNI
+
+Update an existing AKS cluster with Azure CNI to include pod-managed identity.
+
+```azurecli-interactive
+az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity --network-plugin azure
+```
+## Using Kubenet network plugin with Azure Active Directory pod-managed identities
+
+> [!IMPORTANT]
+> Running aad-pod-identity in a cluster with Kubenet is not a recommended configuration because of the security implications. Please follow the mitigation steps and configure policies before enabling aad-pod-identity in a cluster with Kubenet.
+
+## Mitigation
+
+To mitigate the vulnerability at the cluster level, you can use the OpenPolicyAgent admission controller together with the Gatekeeper validating webhook. Provided you have Gatekeeper already installed in your cluster, add the ConstraintTemplate of type K8sPSPCapabilities:
+
+```
+kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/pod-security-policy/capabilities/template.yaml
+```
+Add a constraint based on that template to limit the spawning of Pods with the NET_RAW capability:
+
+```
+apiVersion: constraints.gatekeeper.sh/v1beta1
+kind: K8sPSPCapabilities
+metadata:
+ name: prevent-net-raw
+spec:
+ match:
+ kinds:
+ - apiGroups: [""]
+ kinds: ["Pod"]
+ excludedNamespaces:
+ - "kube-system"
+ parameters:
+ requiredDropCapabilities: ["NET_RAW"]
+```
+ ## Create an AKS cluster with Kubenet network plugin Create an AKS cluster with Kubenet network plugin and pod-managed identity enabled.
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-advanced-policies.md
Expressions used in the `set-variable` policy must return one of the following b
The `trace` policy adds a custom trace into the API Inspector output, Application Insights telemetries, and/or Resource Logs. - The policy adds a custom trace to the [API Inspector](./api-management-howto-api-inspector.md) output when tracing is triggered, i.e. `Ocp-Apim-Trace` request header is present and set to true and `Ocp-Apim-Subscription-Key` request header is present and holds a valid key that allows tracing.-- The policy creates a [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` level specified in the policy is at or higher than the `verbosity` level specified in the diagnostic setting.
+- The policy creates a [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry item in Application Insights when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the diagnostic setting.
- The policy adds a property in the log entry when [Resource Logs](./api-management-howto-use-azure-monitor.md#activity-logs) is enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting. ### Policy statement
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-app-insights.md
Title: Integrate Azure API Management with Azure Application Insights
description: Learn how to log and view events from Azure API Management in Azure Application Insights. - - na Previously updated : 06/20/2018 Last updated : 02/25/2021
Azure API Management allows for easy integration with Azure Application Insights
To follow this guide, you need to have an Azure API Management instance. If you don't have one, complete the [tutorial](get-started-create-service-instance.md) first.
-## Create an Azure Application Insights instance
+## Create an Application Insights instance
-Before you can use Azure Application Insights, you first need to create an instance of the service.
-
-1. Open the **Azure portal** and navigate to **Application Insights**.
- ![Screenshot that shows how to navigate to Application Insights.](media/api-management-howto-app-insights/apim-app-insights-instance-1.png)
-2. Click **+ Add**.
- ![App Insights create](media/api-management-howto-app-insights/apim-app-insights-instance-2.png)
-3. Fill the form. Select **General** as the **Application Type**.
-4. Click **Create**.
-
-## Create a connection between Azure Application Insights and Azure API Management service instance
+Before you can use Application Insights, you first need to create an instance of the service. For steps to create an instance using the Azure portal, see [Workspace-based Application Insights resources](../azure-monitor/app/create-workspace-resource.md).
+## Create a connection between Application Insights and API Management
1. Navigate to your **Azure API Management service instance** in the **Azure portal**.
-2. Select **Application Insights** from the menu on the left.
-3. Click **+ Add**.
- ![Screenshot that shows where to add a new connection.](media/api-management-howto-app-insights/apim-app-insights-logger-1.png)
-4. Select the previously created **Application Insights** instance and provide a short description.
-5. Click **Create**.
-6. You have just created an Azure Application Insights logger with an instrumentation key. It should now appear in the list.
- ![Screenshot that shows where to view the newly created Azure Application Insights logger with instrumentation key.](media/api-management-howto-app-insights/apim-app-insights-logger-2.png)
+1. Select **Application Insights** from the menu on the left.
+1. Click **+ Add**.
+ :::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-logger-1.png" alt-text="Screenshot that shows where to add a new connection":::
+1. Select the previously created **Application Insights** instance and provide a short description.
+1. Click **Create**.
+1. You have just created an Application Insights logger with an instrumentation key. It should now appear in the list.
+ :::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-logger-2.png" alt-text="Screenshot that shows where to view the newly created Application Insights logger with instrumentation key":::
> [!NOTE] > Behind the scenes, a [Logger](/rest/api/apimanagement/2019-12-01/logger/createorupdate) entity is created in your API Management instance, containing the Instrumentation Key of the Application Insights instance.
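To confirm the logger that was created behind the scenes, you can query the management REST API; a hedged sketch with `az rest` (the subscription ID, resource group, and service name are placeholders):

```azurecli-interactive
# List the loggers registered in the API Management instance
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-service-name>/loggers?api-version=2019-12-01"
```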
Before you can use Azure Application Insights, you first need to create an insta
## Enable Application Insights logging for your API 1. Navigate to your **Azure API Management service instance** in the **Azure portal**.
-2. Select **APIs** from the menu on the left.
-3. Click on your API, in this case **Demo Conference API**.
-4. Go to the **Settings** tab from the top bar.
-5. Scroll down to the **Diagnostics Logs** section.
- ![App Insights logger](media/api-management-howto-app-insights/apim-app-insights-api-1.png)
-6. Check the **Enable** box.
-7. Select your attached logger in the **Destination** dropdown.
-8. Input **100** as **Sampling (%)** and tick the **Always log errors** checkbox.
-9. Click **Save**.
+1. Select **APIs** from the menu on the left.
+1. Click on your API, in this case **Demo Conference API**. If configured, select a version.
+1. Go to the **Settings** tab from the top bar.
+1. Scroll down to the **Diagnostics Logs** section.
+ :::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-api-1.png" alt-text="App Insights logger":::
+1. Check the **Enable** box.
+1. Select your attached logger in the **Destination** dropdown.
+1. Input **100** as **Sampling (%)** and select the **Always log errors** checkbox.
+1. Select **Save**.
> [!WARNING]
-> Overriding the default value **0** in the **First bytes of body** field may significantly decrease the performance of your APIs.
+> Overriding the default value **0** in the **Number of payload bytes to log** setting may significantly decrease the performance of your APIs.
> [!NOTE] > Behind the scenes, a [Diagnostic](/rest/api/apimanagement/2019-12-01/diagnostic/createorupdate) entity named 'applicationinsights' is created at the API level.
Before you can use Azure Application Insights, you first need to create an insta
|-|--|--| | Enable | boolean | Specifies whether logging of this API is enabled. | | Destination | Azure Application Insights logger | Specifies Azure Application Insights logger to be used |
-| Sampling (%) | decimal | Values from 0 to 100 (percent). <br/> Specifies what percentage of requests will be logged to Azure Application Insights. 0% sampling means zero requests logged, while 100% sampling means all requests logged. <br/> This setting is used for reducing performance implications of logging requests to Azure Application Insights (see the section below). |
-| Always log errors | boolean | If this setting is selected, all failures will be logged to Azure Application Insights, regardless of the **Sampling** setting. |
-| Basic Options: Headers | list | Specifies the headers that will be logged to Azure Application Insights for requests and responses. Default: no headers are logged. |
-| Basic Options: First bytes of body | integer | Specifies how many first bytes of the body are logged to Azure Application Insights for requests and responses. Default: body is not logged. |
-| Advanced Options: Verbosity | | Specifies the verbosity level. Only custom traces with higher severity level will be logged. Default: Information. |
-| Advanced Options: Frontend Request | | Specifies whether and how *frontend requests* will be logged to Azure Application Insights. *Frontend request* is a request incoming to the Azure API Management service. |
-| Advanced Options: Frontend Response | | Specifies whether and how *frontend responses* will be logged to Azure Application Insights. *Frontend response* is a response outgoing from the Azure API Management service. |
-| Advanced Options: Backend Request | | Specifies whether and how *backend requests* will be logged to Azure Application Insights. *Backend request* is a request outgoing from the Azure API Management service. |
-| Advanced Options: Backend Response | | Specifies whether and how *backend responses* will be logged to Azure Application Insights. *Backend response* is a response incoming to the Azure API Management service. |
+| Sampling (%) | decimal | Values from 0 to 100 (percent). <br/> Specifies the percentage of requests that will be logged to Application Insights. 0% sampling means zero requests logged, while 100% sampling means all requests logged. <br/> Use this setting to reduce effect on performance when logging requests to Application Insights. See [Performance implications and log sampling](#performance-implications-and-log-sampling). |
+| Always log errors | boolean | If this setting is selected, all failures will be logged to Application Insights, regardless of the **Sampling** setting. |
+| Log client IP address | | If this setting is selected, the client IP address for API requests will be logged to Application Insights. |
+| Verbosity | | Specifies the verbosity level. Only custom traces with higher severity level will be logged. Default: Information. |
+| Correlation protocol | | Select protocol used to correlate telemetry sent by multiple components. Default: Legacy <br/>For information, see [Telemetry correlation in Application Insights](../azure-monitor/app/correlation.md). |
+| Basic Options: Headers to log | list | Specifies the headers that will be logged to Application Insights for requests and responses. Default: no headers are logged. |
+| Basic Options: Number of payload bytes to log | integer | Specifies how many first bytes of the body are logged to Application Insights for requests and responses. Default: 0. |
+| Advanced Options: Frontend Request | | Specifies whether and how *frontend requests* will be logged to Application Insights. *Frontend request* is a request incoming to the Azure API Management service. |
+| Advanced Options: Frontend Response | | Specifies whether and how *frontend responses* will be logged to Application Insights. *Frontend response* is a response outgoing from the Azure API Management service. |
+| Advanced Options: Backend Request | | Specifies whether and how *backend requests* will be logged to Application Insights. *Backend request* is a request outgoing from the Azure API Management service. |
+| Advanced Options: Backend Response | | Specifies whether and how *backend responses* will be logged to Application Insights. *Backend response* is a response incoming to the Azure API Management service. |
> [!NOTE] > You can specify loggers on different levels - single API logger or a logger for all APIs.
Before you can use Azure Application Insights, you first need to create an insta
> + if they are different loggers, then both of them will be used (multiplexing logs), > + if they are the same loggers but have different settings, then the one for single API (more granular level) will override the one for all APIs.
-## What data is added to Azure Application Insights
+## What data is added to Application Insights
-Azure Application Insights receives:
+Application Insights receives:
+ *Request* telemetry item, for every incoming request (*frontend request*, *frontend response*), + *Dependency* telemetry item, for every request forwarded to a backend service (*backend request*, *backend response*),
-+ *Exception* telemetry item, for every failed request.
++ *Exception* telemetry item, for every failed request:
+ + failed because of a closed client connection
+ + triggered an *on-error* section of the API policies
+ + has a response HTTP status code matching 4xx or 5xx.
++ *Trace* telemetry item, if you configure a [trace](api-management-advanced-policies.md#Trace) policy. The `severity` setting in the `trace` policy must be equal to or greater than the `verbosity` setting in the Application Insights logging.
-A failed request is a request, which:
-
-+ failed because of a closed client connection, or
-+ triggered an *on-error* section of the API policies, or
-+ has a response HTTP status code matching 4xx or 5xx.
+> [!NOTE]
+> See [Application Insights limits](../azure-monitor/service-limits.md#application-insights) for information about the maximum size and number of metrics and events per Application Insights instance.
## Performance implications and log sampling > [!WARNING] > Logging all events may have serious performance implications, depending on the incoming request rate.
-Based on internal load tests, enabling this feature caused a 40%-50% reduction in throughput when request rate exceeded 1,000 requests per second. Azure Application Insights is designed to use statistical analysis for assessing application performances. It is not intended to be an audit system and is not suited for logging each individual request for high-volume APIs.
+Based on internal load tests, enabling this feature caused a 40%-50% reduction in throughput when the request rate exceeded 1,000 requests per second. Application Insights is designed to use statistical analysis for assessing application performance. It is not intended to be an audit system and is not suited for logging each individual request for high-volume APIs.
-You can manipulate the number of requests being logged by adjusting the **Sampling** setting (see the steps above). Value 100% means all requests are logged, while 0% reflects no logging at all. **Sampling** helps to reduce volume of telemetry, effectively preventing from significant performance degradation, while still carrying the benefits of logging.
+You can manipulate the number of requests being logged by adjusting the **Sampling** setting (see the preceding steps). A value of 100% means all requests are logged, while 0% reflects no logging. **Sampling** helps to reduce volume of telemetry, effectively preventing significant performance degradation, while still carrying the benefits of logging.
Skipping logging of the headers and bodies of requests and responses will also have a positive impact on performance.
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-policies.md
na Previously updated : 11/19/2017 Last updated : 02/17/2021 # API Management policies
This section provides a reference for the following API Management policies. For
- [Send request to a service](api-management-dapr-policies.md#invoke) - uses Dapr runtime to locate and reliably communicate with a Dapr microservice. - [Send message to Pub/Sub topic](api-management-dapr-policies.md#pubsub) - uses Dapr runtime to publish a message to a Publish/Subscribe topic. - [Trigger output binding](api-management-dapr-policies.md#bind) - uses Dapr runtime to invoke an external system via output binding.
+- [Validation policies](validation-policies.md)
+ - [Validate content](validation-policies.md#validate-content) - Validates the size or JSON schema of a request or response body against the API schema.
+ - [Validate parameters](validation-policies.md#validate-parameters) - Validates the request header, query, or path parameters against the API schema.
+ - [Validate headers](validation-policies.md#validate-headers) - Validates the response headers against the API schema.
+ - [Validate status code](validation-policies.md#validate-status-code) - Validates the HTTP status codes in responses against the API schema.
## Next steps For more information working with policies, see:
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/validation-policies.md
+
+ Title: Azure API Management validation policies | Microsoft Docs
+description: Learn about policies you can use in Azure API Management to validate requests and responses.
+
+documentationcenter: ''
++++ Last updated : 03/12/2021+++
+# API Management policies to validate requests and responses
+
+This article provides a reference for the following API Management policies. For information on adding and configuring policies, see [Policies in API Management](./api-management-policies.md).
+
+Use validation policies to validate API requests and responses against an OpenAPI schema and to protect against vulnerabilities such as header or payload injection. While not a replacement for a Web Application Firewall, validation policies provide the flexibility to respond to an additional class of threats that aren't covered by security products that rely on static, predefined rules.
+
+## Validation policies
+
+- [Validate content](#validate-content) - Validates the size or JSON schema of a request or response body against the API schema.
+- [Validate parameters](#validate-parameters) - Validates the request header, query, or path parameters against the API schema.
+- [Validate headers](#validate-headers) - Validates the response headers against the API schema.
+- [Validate status code](#validate-status-code) - Validates the HTTP status codes in responses against the API schema.
+
+> [!NOTE]
+> The maximum size of the API schema that can be used by a validation policy is 4 MB. If the schema exceeds this limit, validation policies will return errors at runtime. To increase the limit, contact [support](https://azure.microsoft.com/support/options/).
+
+## Actions
+
+Each validation policy includes an attribute that specifies an action, which API Management takes when validating an entity in an API request or response against the API schema. An action may be specified for elements that are represented in the API schema and, depending on the policy, for elements that aren't represented in the API schema. An action specified in a policy's child element overrides an action specified for its parent.
+
+Available actions:
+
+| Action | Description |
+| | |
+| ignore | Skip validation. |
+| prevent | Block the request or response processing, log the verbose validation error, and return an error. Processing is interrupted when the first set of errors is detected. |
+| detect | Log validation errors, without interrupting request or response processing. |
+
+## Logs
+
+Details about the validation errors during policy execution are logged to the variable in `context.Variables` specified in the `errors-variable-name` attribute in the policy's root element. When a `prevent` action is configured, a validation error blocks further request or response processing and is also propagated to the `context.LastError` property.
+
+To investigate errors, use a [trace](api-management-advanced-policies.md#Trace) policy to log the errors from context variables to [Application Insights](api-management-howto-app-insights.md).
+
+## Performance implications
+
+Adding validation policies may affect API throughput. The following general principles apply:
+* The larger the API schema size, the lower the throughput will be.
+* The larger the payload in a request or response, the lower the throughput will be.
+* The size of the API schema has a larger impact on performance than the size of the payload.
+* Validation against an API schema that is several megabytes in size may cause request or response timeouts under some conditions. The effect is more pronounced in the **Consumption** and **Developer** tiers of the service.
+
+We recommend performing load tests with your expected production workloads to assess the impact of validation policies on API throughput.
+
+## Validate content
+
+The `validate-content` policy validates the size or JSON schema of a request or response body against the API schema. Formats other than JSON aren't supported.
+
+### Policy statement
+
+```xml
+<validate-content unspecified-content-type-action="ignore|prevent|detect" max-size="size in bytes" size-exceeded-action="ignore|prevent|detect" errors-variable-name="variable name">
+ <content type="content type string, for example: application/json, application/hal+json" validate-as="json" action="ignore|prevent|detect" />
+</validate-content>
+```
+
+### Example
+
+In the following example, the JSON payload in requests and responses is validated in detection mode. Messages with payloads larger than 100 KB are blocked.
+
+```xml
+<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
+ <content type="application/json" validate-as="json" action="detect" />
+ <content type="application/hal+json" validate-as="json" action="detect" />
+</validate-content>
+
+```
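To see the `prevent` action in practice, you could send a request with a content type that isn't specified in the API schema; a hedged sketch with `curl` (the gateway URL, API path, and subscription key are placeholders):

```bash
# A text/plain request should be rejected because unspecified-content-type-action is set to prevent
curl -i "https://<apim-name>.azure-api.net/<api-path>" \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
    -H "Content-Type: text/plain" \
    --data "this payload is not JSON"
```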
+
+### Elements
+
+| Name | Description | Required |
+| | | -- |
+| validate-content | Root element. | Yes |
+| content | Add one or more of these elements to validate the content type in the request or response, and perform the specified action. | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| -- | - | -- | - |
+| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
+| max-size | Maximum length of the body of the request or response, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). | Yes | N/A |
+| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| type | Content type to execute body validation for, checked against the `Content-Type` header. This value is case insensitive. If empty, it applies to every content type specified in the API schema. | No | N/A |
+| validate-as | Validation engine to use for validation of the body of a request or response with a matching content type. Currently, the only supported value is "json". | Yes | N/A |
+| action | [Action](#actions) to perform for requests or responses whose body doesn't match the specified content type. | Yes | N/A |
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound, outbound, on-error
+
+- **Policy scopes:** all scopes
+
+## Validate parameters
+
+The `validate-parameters` policy validates the header, query, or path parameters in requests against the API schema.
+
+> [!IMPORTANT]
+> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-parameters` policy might not work. You may need to reimport your API using management API version `2021-01-01-preview` or later.
++
+### Policy statement
+
+```xml
+<validate-parameters specified-parameter-action="ignore|prevent|detect" unspecified-parameter-action="ignore|prevent|detect" errors-variable-name="variable name">
+ <headers specified-parameter-action="ignore|prevent|detect" unspecified-parameter-action="ignore|prevent|detect">
+ <parameter name="parameter name" action="ignore|prevent|detect" />
+ </headers>
+ <query specified-parameter-action="ignore|prevent|detect" unspecified-parameter-action="ignore|prevent|detect">
+ <parameter name="parameter name" action="ignore|prevent|detect" />
+ </query>
+ <path specified-parameter-action="ignore|prevent|detect">
+ <parameter name="parameter name" action="ignore|prevent|detect" />
+ </path>
+</validate-parameters>
+```
+
+### Example
+
+In this example, all query and path parameters are validated in prevention mode, and headers are validated in detection mode. Validation is overridden for several header parameters:
+
+```xml
+<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="requestParametersValidation">
+ <headers specified-parameter-action="detect" unspecified-parameter-action="detect">
+ <parameter name="Authorization" action="prevent" />
+ <parameter name="User-Agent" action="ignore" />
+ <parameter name="Host" action="ignore" />
+ <parameter name="Referrer" action="ignore" />
+ </headers>
+</validate-parameters>
+```
+
+### Elements
+
+| Name | Description | Required |
+| | | -- |
+| validate-parameters | Root element. Specifies default validation actions for all parameters in requests. | Yes |
+| headers | Add this element to override default validation actions for header parameters in requests. | No |
+| query | Add this element to override default validation actions for query parameters in requests. | No |
+| path | Add this element to override default validation actions for URL path parameters in requests. | No |
+| parameter | Add one or more elements for named parameters to override higher-level configuration of the validation actions. | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| -- | - | -- | - |
+| specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
+| unspecified-parameter-action | [Action](#actions) to perform for request parameters that are not specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| name | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A |
+| action | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | Yes | N/A |
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound
+
+- **Policy scopes:** all scopes
+
+## Validate headers
+
+The `validate-headers` policy validates the response headers against the API schema.
+
+> [!IMPORTANT]
+> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-headers` policy might not work. You may need to reimport your API using management API version `2021-01-01-preview` or later.
+
+### Policy statement
+
+```xml
+<validate-headers specified-header-action="ignore|prevent|detect" unspecified-header-action="ignore|prevent|detect" errors-variable-name="variable name">
+ <header name="header name" action="ignore|prevent|detect" />
+</validate-headers>
+```
+
+### Example
+
+```xml
+<validate-headers specified-header-action="ignore" unspecified-header-action="prevent" errors-variable-name="responseHeadersValidation" />
+```
+### Elements
+
+| Name | Description | Required |
+| | | -- |
+| validate-headers | Root element. Specifies default validation actions for all headers in responses. | Yes |
+| header | Add one or more elements for named headers to override the default validation actions for headers in responses. | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| -- | - | -- | - |
+| specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A |
+| unspecified-header-action | [Action](#actions) to perform for response headers that are not specified in the API schema. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| name | Name of the header to override validation action for. This value is case insensitive. | Yes | N/A |
+| action | [Action](#actions) to perform for the header with the matching name. If the header is specified in the API schema, this value overrides the value of `specified-header-action` in the `validate-headers` element. Otherwise, it overrides the value of `unspecified-header-action` in the `validate-headers` element. | Yes | N/A |
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** outbound, on-error
+
+- **Policy scopes:** all scopes
+
+## Validate status code
+
+The `validate-status-code` policy validates the HTTP status codes in responses against the API schema. This policy may be used to prevent leakage of backend errors, which can contain stack traces.
+
+### Policy statement
+
+```xml
+<validate-status-code unspecified-status-code-action="ignore|prevent|detect" errors-variable-name="variable name">
+ <status-code code="HTTP status code number" action="ignore|prevent|detect" />
+</validate-status-code>
+```
+
+### Example
+
+```xml
+<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="responseStatusCodeValidation" />
+```
+
+### Elements
+
+| Name | Description | Required |
+| | | -- |
+| validate-status-code | Root element. | Yes |
+| status-code | Add one or more elements for HTTP status codes to override the default validation action for status codes in responses. | No |
+
+### Attributes
+
+| Name | Description | Required | Default |
+| -- | - | -- | - |
+| unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that are not specified in the API schema. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| code | HTTP status code to override validation action for. | Yes | N/A |
+| action | [Action](#actions) to perform for the matching status code when it is not specified in the API schema. If the status code is specified in the API schema, this override does not take effect. | Yes | N/A |
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** outbound, on-error
+
+- **Policy scopes:** all scopes
++
+## Validation errors
+The following table lists all possible errors of the validation policies.
+
+* **Details** - Can be used to investigate errors. Not meant to be shared publicly.
+* **Public response** - Error returned to the client. Does not leak implementation details.
+
+| **Name** | **Type** | **Validation rule** | **Details** | **Public response** | **Action** |
+|-|-||||-|
+| **validate-content** | | | | | |
+| |RequestBody | SizeLimit | Request's body is {size} bytes long and it exceeds the configured limit of {maxSize} bytes. | Request's body is {size} bytes long and it exceeds the limit of {maxSize} bytes. | detect / prevent |
+||ResponseBody | SizeLimit | Response's body is {size} bytes long and it exceeds the configured limit of {maxSize} bytes. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {messageContentType} | RequestBody | Unspecified | Unspecified content type {messageContentType} is not allowed. | Unspecified content type {messageContentType} is not allowed. | detect / prevent |
+| {messageContentType} | ResponseBody | Unspecified | Unspecified content type {messageContentType} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| | ApiSchema | | API's schema does not exist or it could not be resolved. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| | ApiSchema | | API's schema does not specify definitions. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {messageContentType} | RequestBody / ResponseBody | MissingDefinition | API's schema does not contain definition {definitionName}, which is associated with the content type {messageContentType}. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {messageContentType} | RequestBody | IncorrectMessage | Body of the request does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | Body of the request does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | detect / prevent |
+| {messageContentType} | ResponseBody | IncorrectMessage | Body of the response does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| | RequestBody | ValidationException | Body of the request cannot be validated for the content type {messageContentType}.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| | ResponseBody | ValidationException | Body of the response cannot be validated for the content type {messageContentType}.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| **validate-parameter / validate-headers** | | | | | |
+| {paramName} / {headerName} | QueryParameter / PathParameter / RequestHeader | Unspecified | Unspecified {path parameter / query parameter / header} {paramName} is not allowed. | Unspecified {path parameter / query parameter / header} {paramName} is not allowed. | detect / prevent |
+| {headerName} | ResponseHeader | Unspecified | Unspecified header {headerName} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| |ApiSchema | | API's schema doesn't exist or it couldn't be resolved. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| | ApiSchema | | API schema does not specify definitions. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {paramName} | QueryParameter / PathParameter / RequestHeader / ResponseHeader | MissingDefinition | API's schema does not contain definition {definitionName}, which is associated with the {query parameter / path parameter / header} {paramName}. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {paramName} | QueryParameter / PathParameter / RequestHeader | IncorrectMessage | Request cannot contain multiple values for the {query parameter / path parameter / header} {paramName}. | Request cannot contain multiple values for the {query parameter / path parameter / header} {paramName}. | detect / prevent |
+| {headerName} | ResponseHeader | IncorrectMessage | Response cannot contain multiple values for the header {headerName}. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {paramName} | QueryParameter / PathParameter / RequestHeader | IncorrectMessage | Value of the {query parameter / path parameter / header} {paramName} does not conform to the definition.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The value of the {query parameter / path parameter / header} {paramName} does not conform to the definition.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | detect / prevent |
+| {headerName} | ResponseHeader | IncorrectMessage | Value of the header {headerName} does not conform to the definition.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {paramName} | QueryParameter / PathParameter / RequestHeader | IncorrectMessage | Value of the {query parameter / path parameter / header} {paramName} cannot be parsed according to the definition. <br/><br/>{ex.Message} | Value of the {query parameter / path parameter / header} {paramName} couldn't be parsed according to the definition. <br/><br/>{ex.Message} | detect / prevent |
+| {headerName} | ResponseHeader | IncorrectMessage | Value of the header {headerName} couldn't be parsed according to the definition. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {paramName} | QueryParameter / PathParameter / RequestHeader | ValidationError | {Query parameter / Path parameter / Header} {paramName} cannot be validated.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| {headerName} | ResponseHeader | ValidationError | Header {headerName} cannot be validated.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
+| **validate-status-code** | | | | | |
+| {status-code} | StatusCode | Unspecified | Response status code {status-code} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
++
+The following table lists all the possible Reason values of a validation error along with possible Message values:
+
+| **Reason** | **Message** |
+|||
+| Bad request | {Details} for context variable, {Public response} for client|
+| Response not allowed | {Details} for context variable, {Public response} for client |
++++++
+## Next steps
+
+For more information about working with policies, see:
+
+- [Policies in API Management](api-management-howto-policies.md)
+- [Transform APIs](transform-api.md)
+- [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings
+- [Policy samples](./policy-reference.md)
+- [Error handling](./api-management-error-handling-policies.md)
app-service App Service Web App Cloning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-app-cloning.md
Here are the known restrictions of app cloning:
* Database content is not cloned * Outbound IP addresses change if cloning to a different scale unit * Not available for Linux Apps
+* Managed Identities are not cloned
### References * [App Service Cloning](app-service-web-app-cloning.md)
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/private-endpoint.md
Slots cannot use Private Endpoint.
Remote Debugging functionality is not available when Private Endpoint is enabled for the Web App. The recommendation is to deploy the code to a slot and remote debug it there.
+FTP access is provided through the inbound public IP address. Private Endpoint does not support FTP access to the Web App.
+ We are improving Private Link feature and Private Endpoint regularly, check [this article][pllimitations] for up-to-date information about limitations. ## Next steps
automanage Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/common-errors.md
Automanage may fail to onboard a machine onto the service. This document explain
## Troubleshooting deployment failures Onboarding a machine to Automanage will result in an Azure Resource Manager deployment being created. If onboarding fails, it may be helpful to consult the deployment for further details as to why it failed. There are links to the deployments in the failure detail flyout, pictured below. ### Check the deployments for the resource group containing the failed VM The failure flyout will contain a link to the deployments within the resource group that contains the machine that failed onboarding and a prefix name you can use to filter deployments with. Clicking the link will take you to the deployments blade, where you can then filter deployments to see Automanage deployments to your machine. If you're deploying across multiple regions, ensure that you click on the deployment in the correct region.
Error | Mitigation
:--|:-| Automanage account insufficient permissions error | This may happen if you have recently moved a subscription containing a new Automanage Account into a new tenant. Steps to resolve this are located [here](./repair-automanage-account.md). Workspace region not matching region mapping requirements | Automanage was unable to onboard your machine but the Log Analytics workspace that the machine is currently linked to is not mapped to a supported Automation region. Ensure that your existing Log Analytics workspace and Automation account are located in a [supported region mapping](../automation/how-to/region-mappings.md).
+"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](https://docs.microsoft.com/azure/role-based-access-control/deny-assignments) was created on your resource which prevented Automanage from accessing your resource. This may have been caused by either a [Blueprint](https://docs.microsoft.com/azure/governance/blueprints/concepts/resource-locking) or a [Managed Application](https://docs.microsoft.com/azure/azure-resource-manager/managed-applications/overview).
"The assignment has failed; there is no additional information available" | Please open a case with Microsoft Azure support. ## Next steps
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-create-composite.md
you can increment the version and add release notes each time you make changes
and publish it to your own [PowerShellGet repository](https://powershellexplained.com/2018-03-03-Powershell-Using-a-NuGet-server-for-a-PSRepository/?utm_source=blog&utm_medium=blog&utm_content=psscriptrepo).
-Once you have create a composite resource module containing your configuration
+Once you have created a composite resource module containing your configuration
(or multiple configurations), you can use them in the [Composable Authoring Experience](./compose-configurationwithcompositeresources.md)
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
description: To create highly available and resilient applications in Azure, Ava
Previously updated : 01/26/2021 Last updated : 03/16/2021
To achieve comprehensive business continuity on Azure, build your application ar
| Americas | Europe | Africa | Asia Pacific | |--|-||-| | | | | |
-| Canada Central | France Central | South Africa North* | Japan East |
-| Central US | Germany West Central | | Southeast Asia |
-| East US | North Europe | | Australia East |
-| East US 2 | UK South | | |
-| South Central US | West Europe | | |
-| US Gov Virginia | | | |
+| Brazil South | France Central | South Africa North* | Japan East |
+| Canada Central | Germany West Central | | Southeast Asia |
+| Central US | North Europe | | Australia East |
+| East US | UK South | | |
+| East US 2 | West Europe | | |
+| South Central US | | | |
+| US Gov Virginia | | | |
| West US 2 | | | |
To achieve comprehensive business continuity on Azure, build your application ar
| Azure Database for MySQL ΓÇô Flexible Server | :large_blue_diamond: | | Azure Database for PostgreSQL ΓÇô Flexible Server | :large_blue_diamond: | | Azure DDoS Protection | :large_blue_diamond: |
+| Azure Disk Encryption | :large_blue_diamond: |
| Azure Firewall | :large_blue_diamond: | | Azure Firewall Manager | :large_blue_diamond: | | Azure Kubernetes Service (AKS) | :large_blue_diamond: |
To achieve comprehensive business continuity on Azure, build your application ar
| Azure SQL: Virtual Machine | :large_blue_diamond: | | Azure Search | :large_blue_diamond: | | Azure Web Application Firewall | :large_blue_diamond: |
-| Cognitive
| Container Registry | :large_blue_diamond: | | Event Grid | :large_blue_diamond: | | Network Watcher | :large_blue_diamond: |
To achieve comprehensive business continuity on Azure, build your application ar
| Azure Advisor | :globe_with_meridians: | | Azure Blueprints | :globe_with_meridians: | | Azure Bot Services | :globe_with_meridians: |
+| Azure Front Door | :globe_with_meridians: |
| Azure Defender for IoT | :globe_with_meridians: | | Azure Front Door | :globe_with_meridians: | | Azure Information Protection | :globe_with_meridians: |
azure-arc Create Data Controller Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-azure-data-studio.md
Follow these steps to create an Azure Arc data controller using the Deployment w
1. Choose the desired subscription and resource group. 1. Select an Azure location.
- The Azure location selected here is the location in Azure where the *metadata* about the data controller and the database instances that it manages will be stored. The data controller and database instances will be actually crewted in your Kubernetes cluster wherever that may be.
+ The Azure location selected here is the location in Azure where the *metadata* about the data controller and the database instances that it manages will be stored. The data controller and database instances themselves will be created in your Kubernetes cluster, wherever that may be.
10. Select the appropriate Connectivity Mode. Learn more on [Connectivity modes](./connectivity.md). **Click Next**.
kubectl describe po/<pod name> --namespace arc
## Troubleshooting creation problems
-If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md).
+If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md).
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Last updated 02/08/2021
Azure Cache for Redis offers Redis persistence using the following models:
-* **RDB persistence** - When RDB (Redis database) persistence is configured, Azure Cache for Redis persists a snapshot of the Azure Cache for Redis in a Redis binary format to disk based on a configurable backup frequency. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
+* **RDB persistence** - When RDB (Redis database) persistence is configured, Azure Cache for Redis persists a snapshot of the Azure Cache for Redis in a Redis binary format to disk (in an Azure Storage account) based on a configurable backup frequency. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
* **AOF persistence** - When AOF (Append only file) persistence is configured, Azure Cache for Redis saves every write operation to a log that is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence. Persistence writes Redis data into an Azure Storage account that you own and manage. You can configure from the **New Azure Cache for Redis** blade during cache creation and on the **Resource menu** for existing premium caches.
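Persistence can also be configured when creating a premium cache from the command line; a hedged sketch with the Azure CLI (the storage connection string and resource names are placeholders, and the configuration keys shown are assumptions to verify against your CLI version):

```azurecli-interactive
# Create a Premium cache with RDB persistence writing snapshots every 60 minutes
az redis create \
    --name myPremiumCache \
    --resource-group myResourceGroup \
    --location eastus \
    --sku Premium \
    --vm-size p1 \
    --redis-configuration '{
        "rdb-backup-enabled": "true",
        "rdb-backup-frequency": "60",
        "rdb-backup-max-snapshot-count": "1",
        "rdb-storage-connection-string": "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<key>"
    }'
```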
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
You can scale to a different pricing tier with the following restrictions:
* You can't scale from a **Standard** cache down to a **Basic** cache. * You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a subsequent scaling operation to the desired size. * You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a subsequent scaling operation.
-* You can't scale from a larger size down to the **C0 (250 MB)** size.
+* You can't scale from a larger size down to the **C0 (250 MB)** size. However, you can scale down to any other size within the same pricing tier. For example, you can scale down from C5 Standard to C1 Standard.
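If you script the scale operation, a within-tier change might look like the following sketch with the Azure CLI (the cache name, resource group, and target size are placeholders; verify the parameters against your CLI version):

```azurecli-interactive
# Scale an existing Standard cache from C5 down to C1 within the same tier
az redis update \
    --name myCache \
    --resource-group myResourceGroup \
    --sku Standard \
    --vm-size c1
```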
While the cache is scaling to the new pricing tier, a **Scaling** status is displayed in the **Azure Cache for Redis** blade.
In the Azure portal, you can see the scaling operation in progress. When scaling
[redis-cache-pricing-tier-blade]: ./media/cache-how-to-scale/redis-cache-pricing-tier-blade.png
-[redis-cache-scaling]: ./media/cache-how-to-scale/redis-cache-scaling.png
+[redis-cache-scaling]: ./media/cache-how-to-scale/redis-cache-scaling.png
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
While the full middleware registration set of APIs is not yet exposed, we do sup
Bindings are defined by using attributes on methods, parameters, and return types. A function method is a method with a `Function` and a trigger attribute applied to an input parameter, as shown in the following example: The trigger attribute specifies the trigger type and binds input data to a method parameter. The previous example function is triggered by a queue message, and the queue message is passed to the method in the `myQueueItem` parameter.
A function can have zero or more input bindings that can pass data to a function
To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `functiontesting2` by using an output binding: ### Multiple output bindings
Likewise, the function returns an `HttpReponseData` object, which provides data
The following code is an HTTP trigger ## Logging
In .NET isolated, you can write to logs by using an [`ILogger`](/dotnet/api/micr
The following example shows how to get an `ILogger` and write logs inside a function: Use various methods of `ILogger` to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-sources-iis-logs.md
IIS log records have a type of **W3CIISLog** and have the properties in the foll
| sSiteName |Name of the IIS site. | | TimeGenerated |Date and time the entry was logged. | | TimeTaken |Length of time to process the request in milliseconds. |
+| csHost | Host name. |
+| csBytes | Number of bytes that the server received. |
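For example, the new fields could be summarized from the command line; a hedged sketch using the Azure CLI (the workspace GUID is a placeholder):

```azurecli-interactive
# Total bytes received per host name over the last day
az monitor log-analytics query \
    --workspace "<log-analytics-workspace-guid>" \
    --analytics-query "W3CIISLog | where TimeGenerated > ago(1d) | summarize TotalBytesReceived = sum(csBytes) by csHost" \
    --timespan P1D
```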
## Log queries with IIS logs The following table provides different examples of log queries that retrieve IIS log records.
The following table provides different examples of log queries that retrieve IIS
## Next steps * Configure Azure Monitor to collect other [data sources](../agents/agent-data-sources.md) for analysis.
-* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
+* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-definition.md
Action groups provide a modular and reusable way to trigger actions for your Azu
## Define a template
-Certain work item types can use templates that you define in the ITSM tool. By using templates, you can define fields that will be automatically populated according to fixed values for an action group. You can define which template you want to use as a part of the definition of an action group.
+Certain work item types can use templates that you define in the ITSM tool. By using templates, you can define fields that will be automatically populated according to fixed values for an action group. You can define which template you want to use as a part of the definition of an action group. For information about how to create templates in ServiceNow, see the [ServiceNow documentation](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
To create an action group:
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Ensure that you've met the following prerequisites:
1. Use the link https://(instance name).service-now.com/api/sn_em_connector/em/inbound_event?source=azuremonitor the URI for the secure export definition. 2. Follow the instructions according to the version:
- * [Paris](https://docs.servicenow.com/bundle/paris-it-operations-management/page/product/event-management/task/azure-events-authentication.html)
- * [Orlando](https://docs.servicenow.com/bundle/orlando-it-operations-management/page/product/event-management/task/azure-events-authentication.html)
- * [New York](https://docs.servicenow.com/bundle/newyork-it-operations-management/page/product/event-management/task/azure-events-authentication.html)
+ * [Paris](https://docs.servicenow.com/bundle/paris-it-operations-management/page/product/event-management/concept/azure-integration.html)
+ * [Orlando](https://docs.servicenow.com/bundle/orlando-it-operations-management/page/product/event-management/concept/azure-integration.html)
+ * [New York](https://docs.servicenow.com/bundle/newyork-it-operations-management/page/product/event-management/concept/azure-integration.html)
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Follow these instructions to capture troubleshooting logs for your framework.
```xml <TelemetryModules>
- <Add Type="Microsoft.ApplicationInsights.Extensibility.HostingStartup.FileDiagnosticsTelemetryModule, Microsoft.AspNet.ApplicationInsights.HostingStartup">
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.Implementation.Tracing.FileDiagnosticsTelemetryModule, Microsoft.ApplicationInsights">
<Severity>Verbose</Severity> <LogFileName>mylog.txt</LogFileName> <LogFilePath>C:\\SDKLOGS</LogFilePath>
azure-monitor Visual Studio Trends https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/visual-studio-trends.md
- Title: Analyzing Trends in Visual Studio | Microsoft Docs
-description: Analyze, visualize, and explore trends in your Application Insights telemetry in Visual Studio.
- Previously updated : 03/17/2017---
-# Analyzing Trends in Visual Studio
-The Application Insights Trends tool visualizes how your web application's important telemetry events change over time, helping you quickly identify problems and anomalies. By linking you to more detailed diagnostic information, Trends can help you improve your app's performance, track down the causes of exceptions, and uncover insights from your custom events.
-
-![Example Trends window](./media/visual-studio-trends/app-insights-trends-hero-750.png)
-
-## Configure your web app for Application Insights
-
-If you haven't done this already, [configure your web app for Application Insights](./app-insights-overview.md). This allows it to send telemetry to the Application Insights portal. The Trends tool reads the telemetry from there.
-
-Application Insights Trends is available in Visual Studio 2015 Update 3 and later.
-
-## Open Application Insights Trends
-To open the Application Insights Trends window:
-
-* From the Application Insights toolbar button, choose **Explore Telemetry Trends**, or
-* From the project context menu, choose **Application Insights > Explore Telemetry Trends**, or
-* From the Visual Studio menu bar, choose **View > Other Windows > Application Insights Trends**.
-
-You may see a prompt to select a resource. Click **Select a resource**, sign in with an Azure subscription, then choose an Application Insights resource from the list for which you'd like to analyze telemetry trends.
-
-## Choose a trend analysis
-![Menu of common types of trend analysis](./media/visual-studio-trends/app-insights-trends-1-750.png)
-
-Get started by choosing from one of five common trend analyses, each analyzing data from the last 24 hours:
-
-* **Investigate performance issues with your server requests** - Requests made to your service, grouped by response times
-* **Analyze errors in your server requests** - Requests made to your service, grouped by HTTP response code
-* **Examine the exceptions in your application** - Exceptions from your service, grouped by exception type
-* **Check the performance of your application's dependencies** - Services called by your service, grouped by response times
-* **Inspect your custom events** - Custom events you've set up for your service, grouped by event type.
-
-These pre-built analyses are available later from the **View common types of telemetry analysis** button in the upper-left corner of the Trends window.
-
-## Visualize trends in your application
-Application Insights Trends creates a time series visualization from your app's telemetry. Each time series visualization displays one type of telemetry, grouped by one property of that telemetry, over some time range. For example, you might want to view server requests, grouped by the country/region from which they originated, over the last 24 hours. In this example, each bubble on the visualization would represent a count of the server requests for some country/region during one hour.
-
-Use the controls at the top of the window to adjust what types of telemetry you view. First, choose the telemetry types in which you're interested:
-
-* **Telemetry Type** - Server requests, exceptions, dependencies, or custom events
-* **Time Range** - Anywhere from the last 30 minutes to the last 3 days
-* **Group By** - Exception type, problem ID, country/region, and more.
-
-Then, click **Analyze Telemetry** to run the query.
-
-To navigate between bubbles in the visualization:
-
-* Click to select a bubble, which updates the filters at the bottom of the window, summarizing just the events that occurred during a specific time period
-* Double-click a bubble to navigate to the Search tool and see all of the individual telemetry events that occurred during that time period
-* Ctrl-click a bubble to de-select it in the visualization.
-
-> [!TIP]
-> The Trends and Search tools work together to help you pinpoint the causes of issues in your service among thousands of telemetry events. For example, if one afternoon your customers notice your app is being less responsive, start with Trends. Analyze requests made to your service over the past several hours, grouped by response time. See if there's an unusually large cluster of slow requests. Then double click that bubble to go to the Search tool, filtered to those request events. From Search, you can explore the contents of those requests and navigate to the code involved to resolve the issue.
->
->
-
-## Filter
-Discover more specific trends with the filter controls at the bottom of the window. To apply a filter, click on its name. You can quickly switch between different filters to discover trends that may be hiding in a particular dimension of your telemetry. If you apply a filter in one dimension, like Exception Type, filters in other dimensions remain clickable even though they appear grayed-out. To un-apply a filter, click it again. Ctrl-click to select multiple filters in the same dimension.
-
-![Trend filters](./media/visual-studio-trends/TrendsFiltering-750.png)
-
-What if you want to apply multiple filters?
-
-1. Apply the first filter.
-2. Click the **Apply selected filters and query again** button by the name of the dimension of your first filter. This will re-query your telemetry for only events that match the first filter.
-3. Apply a second filter.
-4. Repeat the process to find trends in specific subsets of your telemetry. For example, server requests named "GET Home/Index" *and* that came from Germany *and* that received a 500 response code.
-
-To un-apply one of these filters, click the **Remove selected filters and query again** button for the dimension.
-
-![Multiple filters](./media/visual-studio-trends/TrendsFiltering2-750.png)
-
-## Find anomalies
-The Trends tool can highlight bubbles of events that are anomalous compared to other bubbles in the same time series. In the View Type dropdown, choose **Counts in time bucket (highlight anomalies)** or **Percentages in time bucket (highlight anomalies)**. Red bubbles are anomalous. Anomalies are defined as bubbles with counts/percentages exceeding 2.1 times the standard deviation of the counts/percentages that occurred in the past two time periods (48 hours if you're viewing the last 24 hours, etc.).
-
-![Colored dots indicate anomalies](./media/visual-studio-trends/TrendsAnomalies-750.png)
-
-> [!TIP]
-> Highlighting anomalies is especially helpful for finding outliers in time series of small bubbles that may otherwise look similarly sized.
->
->
-
-## <a name="next"></a>Next steps
-* **[Working with Application Insights in Visual Studio](./visual-studio.md)**. Search telemetry, see data in CodeLens, and configure Application Insights. All within Visual Studio.
-* **[Working with the Application Insights portal](./overview-dashboard.md)**. Dashboards, powerful diagnostic and analytic tools, alerts, a live dependency map of your application, and telemetry export.
-
azure-monitor Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/visual-studio.md
In the Code Lens line above each handler method, you see a count of the requests
[More about Application Insights in Code Lens](./visual-studio-codelens.md)
-## Trends
-Trends is a tool for visualizing how your app behaves over time.
-
-Choose **Explore Telemetry Trends** from the Application Insights toolbar button or Application Insights Search window. Choose one of five common queries to get started. You can analyze different datasets based on telemetry types, time ranges, and other properties.
-
-To find anomalies in your data, choose one of the anomaly options under the "View Type" dropdown. The filtering options at the bottom of the window make it easy to hone in on specific subsets of your telemetry.
-
-![Trends](./media/visual-studio/51.png)
-
-[More about Trends](./visual-studio-trends.md).
- ## Local monitoring (From Visual Studio 2015 Update 2) If you haven't configured the SDK to send telemetry to the Application Insights portal (so that there is no instrumentation key in ApplicationInsights.config) then the diagnostics window displays telemetry from your latest debugging session.
azure-monitor Log Standard Columns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-standard-columns.md
Use these `union withsource = tt *` queries sparingly as scans across data types
It is always more efficient to use the \_SubscriptionId column than extracting it by parsing the \_ResourceId column.
-## \_SubstriptionId
+## \_SubscriptionId
The **\_SubscriptionId** column holds the subscription ID of the resource that the record is associated with. This gives you a standard column to use to scope your query to only records from a particular subscription, or to compare different subscriptions. For Azure resources, the value of **\_SubscriptionId** is the subscription part of the [Azure resource ID URL](../../azure-resource-manager/templates/template-functions-resource.md). The column is limited to Azure resources, including [Azure Arc](../../azure-arc/overview.md) resources, and to custom logs that indicated the resource ID during ingestion.
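For example, a minimal Azure PowerShell sketch (the workspace GUID, subscription ID, and use of the `AzureActivity` table are placeholders and assumptions, not values from this article) might scope a query with **\_SubscriptionId** rather than parsing **\_ResourceId**:

```powershell
# Hypothetical workspace (CustomerId GUID) and subscription ID; replace with your own.
$workspaceId    = "00000000-0000-0000-0000-000000000000"
$subscriptionId = "11111111-1111-1111-1111-111111111111"

# Filtering on _SubscriptionId avoids extracting the subscription from _ResourceId.
$query = "AzureActivity | where _SubscriptionId == '$subscriptionId' | summarize count() by ResourceGroup"

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results
```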
azure-portal Azure Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-overview.md
Title: Azure portal overview description: The Azure portal is a graphical user interface that you can use to manage your Azure services. Learn how to navigate and find resources in the Azure portal. keywords: portal Previously updated : 12/20/2019 Last updated : 03/12/2021
If you choose docked mode for the portal menu, it will always be visible. You ca
## Azure Home
-As a new subscriber to Azure services, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Azure Home**. This page compiles resources that help you get the most from your Azure subscription. We have included links to free online courses, documentation, core services, and useful sites for staying current and managing change for your organization. For quick and easy access to work in progress, we also show a list of your most recently visited resources. You canΓÇÖt customize this page, but you can choose whether to see **Azure Home** or **Azure Dashboard** as your default view. The first time you sign in, thereΓÇÖs a prompt at the top of the page where you can save your preference.
+As a new subscriber to Azure services, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Azure Home**. This page compiles resources that help you get the most from your Azure subscription. We have included links to free online courses, documentation, core services, and useful sites for staying current and managing change for your organization. For quick and easy access to work in progress, we also show a list of your most recently visited resources. You can't customize this page, but you can choose whether to see **Azure Home** or **Azure Dashboard** as your default view. The first time you sign in, there's a prompt at the top of the page where you can save your preference.
![Screenshot showing where to save your preference.](./media/azure-portal-overview/azure-portal-default-view.png)
Both the Azure portal menu and the Azure default view can be changed in **Portal
## Azure Dashboard
-Dashboards provide a focused view of the resources in your subscription that matter most to you. WeΓÇÖve given you a default dashboard to get you started. You can customize this dashboard to bring the resources you use frequently into a single view. Any changes you make to the default view affect your experience only. However, you can create additional dashboards for your own use or publish your customized dashboards and share them with other users in your organization. For more information, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+Dashboards provide a focused view of the resources in your subscription that matter most to you. We've given you a default dashboard to get you started. You can customize this dashboard to bring the resources you use frequently into a single view. Any changes you make to the default view affect your experience only. However, you can create additional dashboards for your own use or publish your customized dashboards and share them with other users in your organization. For more information, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
## Getting around the portal
-ItΓÇÖs helpful to understand the basic portal layout and how to interact with it. Here, weΓÇÖll introduce the components of the user interface and some of the terminology we use to give instructions. For a more detailed tour of the portal, see the course lesson [Navigate the portal](/learn/modules/tour-azure-portal/3-navigate-the-portal).
+It's helpful to understand the basic portal layout and how to interact with it. Here, we'll introduce the components of the user interface and some of the terminology we use to give instructions. For a more detailed tour of the portal, see the course lesson [Navigate the portal](/learn/modules/tour-azure-portal/3-navigate-the-portal).
-The Azure portal menu and page header are global elements that are always present. These persistent features are the ΓÇ£shellΓÇ¥ for the user interface associated with each individual service or feature and the header provides access to global controls. The configuration page (sometimes referred to as a ΓÇ£bladeΓÇ¥) for a resource may also have a resource menu to help you move between features.
+The Azure portal menu and page header are global elements that are always present. These persistent features are the "shell" for the user interface associated with each individual service or feature and the header provides access to global controls. The configuration page (sometimes referred to as a "blade") for a resource may also have a resource menu to help you move between features.
The figure below labels the basic elements of the Azure portal, each of which are described in the following table.
The figure below labels the basic elements of the Azure portal, each of which ar
## Get started with services
-If youΓÇÖre a new subscriber, youΓÇÖll have to create a resource before thereΓÇÖs anything to manage. Select **+ Create a resource** to view the services available in the Azure Marketplace. YouΓÇÖll find applications and services from hundreds of providers here, all certified to run on Azure.
+If you're a new subscriber, you'll have to create a resource before there's anything to manage. Select **+ Create a resource** to view the services available in the Azure Marketplace. You'll find hundreds of applications and services from many providers here, all certified to run on Azure.
We pre-populated your Favorites in the sidebar with links to commonly used services. To view all available services, select **All services** from the sidebar.
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quick-create-template.md
Title: Create an Azure portal dashboard by using an Azure Resource Manager templ
description: Learn how to create an Azure portal dashboard by using an Azure Resource Manager template. Previously updated : 06/15/2020 Last updated : 03/15/2021 # Quickstart: Create a dashboard in the Azure portal by using an ARM template
If your environment meets the prerequisites and you're familiar with using ARM t
The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps.
-1. In the Azure portal, select Cloud Shell.
+1. In the Azure portal, select **Cloud Shell**.
![Select Cloud shell from the Azure portal ribbon](media/quick-create-template/cloud-shell.png)
+1. In the **Cloud Shell** window, select **PowerShell**.
+
+ ![Select PowerShell in the terminal window](media/quick-create-template/powershell.png)
+ 1. Copy the following command and enter it at the command prompt to create a resource group. ```powershell
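   # Hypothetical example only: the resource group name and location below are placeholders,
   # not necessarily the values used elsewhere in this quickstart.
   New-AzResourceGroup -Name SimpleVmResourceGroup -Location centralus
   ```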
The Azure portal was used to deploy the template. In addition to the Azure porta
## Review deployed resources
-Check that the dashboard was created successfully and that you can see data from the VM.
-
-1. In the Azure portal, select **Dashboard**.
-
- ![Azure portal navigation to dashboard](media/quick-create-template/navigate-to-dashboards.png)
-
-1. On the dashboard page, select **Simple VM Dashboard**.
-
- ![Navigate to Simple VM Dashboard](media/quick-create-template/select-simple-vm-dashboard.png)
-
-1. Review the dashboard that the ARM template created. You can see that some of the content is static, but there are also charts that show the performance of the VM you created at the beginning.
-
- ![Review Simple VM Dashboard](media/quick-create-template/review-simple-vm-dashboard.png)
## Clean up resources
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md
Last updated 12/4/2020
# Quickstart: Create an Azure portal dashboard with Azure CLI
-A dashboard in the Azure portal is a focused and organized view of your cloud resources.
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This
+article focuses on the process of using Azure CLI to create a dashboard.
+The dashboard shows the performance of a virtual machine (VM), as well as some static information
+and links.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
az portal dashboard update --resource-group myResourceGroup --name 'Simple VM Da
--input-path portal-dashboard-template-testvm.json --location centralus ```
-Verify that you can see data about the virtual machine from within the Azure portal.
-
-1. In the Azure portal, select **Dashboard**.
-
- ![Azure portal navigation to dashboard](media/quickstart-portal-dashboard-powershell/navigate-to-dashboards.png)
-
-1. On the dashboard page, select **Simple VM Dashboard**.
-
- ![Navigate to Simple VM Dashboard](media/quickstart-portal-dashboard-powershell/select-simple-vm-dashboard.png)
-
-1. Review the dashboard. You can see that some of the content is static, but there are also charts
- that show the performance of the VM.
-
- ![Review Simple VM Dashboard](media/quickstart-portal-dashboard-powershell/review-simple-vm-dashboard.png)
## Clean up resources
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-powershell.md
cmdlet. For more information about installing the Az PowerShell module, see
[Install Azure PowerShell](/powershell/azure/install-az-ps). > [!IMPORTANT]
-> While the **Az.Portal** PowerShell module is in preview, you must install it separately from from
+> While the **Az.Portal** PowerShell module is in preview, you must install it separately from
> the Az PowerShell module using the `Install-Module` cmdlet. Once this PowerShell module becomes > generally available, it becomes part of future Az PowerShell module releases and available > natively from within Azure Cloud Shell.
Check that the dashboard was created successfully.
Get-AzPortalDashboard -Name $dashboardName -ResourceGroupName $resourceGroupName ```
-Verify that you can see data about the VM from within the Azure portal.
-
-1. In the Azure portal, select **Dashboard**.
-
- ![Azure portal navigation to dashboard](media/quickstart-portal-dashboard-powershell/navigate-to-dashboards.png)
-
-1. On the dashboard page, select **Simple VM Dashboard**.
-
- ![Navigate to Simple VM Dashboard](media/quickstart-portal-dashboard-powershell/select-simple-vm-dashboard.png)
-
-1. Review the dashboard. You can see that some of the content is static, but there are also charts
- that show the performance of the VM.
-
- ![Review Simple VM Dashboard](media/quickstart-portal-dashboard-powershell/review-simple-vm-dashboard.png)
## Clean up resources
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
There are some important steps to do before moving a resource. By verifying thes
* [Networking move guidance](./move-limitations/networking-move-limitations.md) * [Recovery Services move guidance](../../backup/backup-azure-move-recovery-services-vault.md?toc=/azure/azure-resource-manager/toc.json) * [Virtual Machines move guidance](./move-limitations/virtual-machines-move-limitations.md)
+ * To move an Azure subscription to a new management group, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
1. If you move a resource that has an Azure role assigned directly to the resource (or a child resource), the role assignment is not moved and becomes orphaned. After the move, you must re-create the role assignment. Eventually, the orphaned role assignment will be automatically removed, but it is a best practice to remove the role assignment before moving the resource.
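As a hedged illustration (the object ID, role name, and scope below are placeholders, not values from this article), re-creating a role assignment at the resource's new location could look like this in Azure PowerShell:

```powershell
# Placeholders: the principal's object ID, the role it previously had,
# and the resource ID of the resource at its new location.
New-AzRoleAssignment -ObjectId "00000000-0000-0000-0000-000000000000" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<new-resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"
```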
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
For some resource types, you need to contact support to have the 800 instance li
## Microsoft.DevTestLab
-* labs/virtualMachines - By default, limited to 800 instances.
* schedules ## Microsoft.EnterpriseKnowledgeGraph
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-management-group.md
Title: Deploy resources to management group description: Describes how to deploy resources at the management group scope in an Azure Resource Manager template. Previously updated : 01/13/2021 Last updated : 03/16/2021 # Management group deployments with ARM templates
The next example creates a new management group in the management group specifie
} ```
+To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-subscriptions-in-arm-template).
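For reference, a management-group-scoped template, including one that moves a subscription, can be deployed with Azure PowerShell. In this sketch the management group ID, deployment location, and template file name are placeholders:

```powershell
# Placeholders: target management group, metadata location, and your template file.
New-AzManagementGroupDeployment -ManagementGroupId "myMgmtGroup" `
    -Location "westus2" `
    -TemplateFile "move-subscription.json"
```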
+ ## Azure Policy Custom policy definitions that are deployed to the management group are extensions of the management group. To get the ID of a custom policy definition, use the [extensionResourceId()](template-functions-resource.md#extensionresourceid) function. Built-in policy definitions are tenant level resources. To get the ID of a built-in policy definition, use the [tenantResourceId()](template-functions-resource.md#tenantresourceid) function.
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
Azure SQL is built upon the familiar SQL Server engine, so you can migrate appli
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your business requirements. Whether you prioritize cost savings or minimal administration, this article can help you decide which approach delivers against the business requirements you care about most. - If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player] -
+> [!TIP]
+> How can we make Azure SQL better? [Take the survey](https://aka.ms/AzureSQLSurvey).
## Overview
azure-sql Sql Data Sync Sync Data Between Sql Databases Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/sql-data-sync-sync-data-between-sql-databases-rest-api.md
+
+ Title: "REST API: Sync between multiple databases"
+description: Use a REST API example script to sync between multiple databases.
++++
+ms.devlang: REST API
++++ Last updated : 03/12/2019++
+# Use REST API to sync data between multiple databases
++
+This REST API example configures SQL Data Sync to sync data between multiple databases.
+
+For an overview of SQL Data Sync, see [Sync data across multiple cloud and on-premises databases with SQL Data Sync in Azure](../sql-data-sync-data-sql-server-sql-database.md).
+
+> [!IMPORTANT]
+> SQL Data Sync does not support Azure SQL Managed Instance at this time.
+
+## Create sync group
+
+Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncgroups/createorupdate) template to create a sync group.
+
+When creating a sync group, do not pass in the sync schema (table\column) and do not pass in masterSyncMemberName, because at this time the sync group does not have table\column information yet.
+
+Sample request for creating a sync group:
+
+```http
+PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187?api-version=2015-05-01-preview
+```
+
+```json
+{
+ "properties": {
+ "interval": -1,
+ "lastSyncTime": "0001-01-01T08:00:00Z",
+ "conflictResolutionPolicy": "HubWin",
+ "syncDatabaseId": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328",
+ "hubDatabaseUserName": "hubUser"
+ }
+}
+```
+
+Sample response for creating a sync group:
+
+Status code: 200
+```json
+{
+ "properties": {
+ "interval": -1,
+ "lastSyncTime": "0001-01-01T08:00:00Z",
+ "conflictResolutionPolicy": "HubWin",
+ "syncDatabaseId": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328",
+ "hubDatabaseUserName": "hubUser",
+ "syncState": "NotReady"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187",
+ "name": "syncgroupcrud-3187",
+ "type": "Microsoft.Sql/servers/databases/syncGroups"
+}
+```
+
+Status code: 201
+```json
+{
+ "properties": {
+ "interval": -1,
+ "lastSyncTime": "0001-01-01T08:00:00Z",
+ "conflictResolutionPolicy": "HubWin",
+ "syncDatabaseId": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328",
+ "hubDatabaseUserName": "hubUser",
+ "syncState": "NotReady"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187",
+ "name": "syncgroupcrud-3187",
+ "type": "Microsoft.Sql/servers/databases/syncGroups"
+}
+```
+
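If you prefer to script the call, the following Azure PowerShell sketch issues the same PUT request shown above. It assumes the Az.Accounts module for the bearer token, reuses the sample resource names, and abbreviates the body, so treat it as an illustration rather than a complete script:

```powershell
# Acquire an Azure Resource Manager token for the signed-in account (Az.Accounts module).
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token

# Same resource IDs as the sample request above.
$uri = "https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187?api-version=2015-05-01-preview"

# Minimal body; no sync schema or masterSyncMemberName on the initial create.
$body = @{
    properties = @{
        interval                 = -1
        conflictResolutionPolicy = "HubWin"
        syncDatabaseId           = "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328"
        hubDatabaseUserName      = "hubUser"
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri $uri -Body $body -ContentType "application/json" -Headers @{ Authorization = "Bearer $token" }
```

The response should echo the sync group with `syncState` set to `NotReady`, matching the sample responses above.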
+## Create sync member
+
+Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncmembers/createorupdate) template to create a sync member.
+
+Sample request for creating a sync member:
+
+```http
+PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/syncMembers/syncgroupcrud-4879?api-version=2015-05-01-preview
+```
+
+```json
+{
+ "properties": {
+ "databaseType": "AzureSqlDatabase",
+ "serverName": "syncgroupcrud-3379.database.windows.net",
+ "databaseName": "syncgroupcrud-7421",
+ "userName": "myUser",
+ "syncDirection": "Bidirectional",
+ "syncState": "UnProvisioned"
+ }
+}
+```
+Sample response for creating a sync member:
+
+Status code: 200
+```json
+{
+ "properties": {
+ "databaseType": "AzureSqlDatabase",
+ "serverName": "syncgroupcrud-3379.database.windows.net",
+ "databaseName": "syncgroupcrud-7421",
+ "userName": "myUser",
+ "syncDirection": "Bidirectional",
+ "syncState": "UnProvisioned"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/syncMembers/syncgroupcrud-4879",
+ "name": "syncgroupcrud-4879",
+ "type": "Microsoft.Sql/servers/databases/syncGroups/syncMembers"
+}
+```
+
+Status code: 201
+```json
+{
+ "properties": {
+ "databaseType": "AzureSqlDatabase",
+ "serverName": "syncgroupcrud-3379.database.windows.net",
+ "databaseName": "syncgroupcrud-7421",
+ "userName": "myUser",
+ "syncDirection": "Bidirectional",
+ "syncState": "UnProvisioned"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/syncMembers/syncgroupcrud-4879",
+ "name": "syncgroupcrud-4879",
+ "type": "Microsoft.Sql/servers/databases/syncGroups/syncMembers"
+}
+```
+
+## Refresh schema
+
+Once your sync group is created successfully, refresh schema using the following templates.
+
+Use the [refresh hub schema](https://docs.microsoft.com/rest/api/sql/syncgroups/refreshhubschema) template to refresh the schema for the hub database.
+
+Sample request for refreshing a hub database schema:
+
+```http
+POST https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/refreshHubSchema?api-version=2015-05-01-preview
+```
+
+Sample response for refreshing a hub database schema:
+
+Status code: 200
+
+Status code: 202
+
+Use the [list hub schemas](https://docs.microsoft.com/rest/api/sql/syncgroups/listhubschemas) template to list the hub database schema.
+
+Use the [refresh member schema](https://docs.microsoft.com/rest/api/sql/syncmembers/refreshmemberschema) template to refresh the member database schema.
+
+Use the [list member schema](https://docs.microsoft.com/rest/api/sql/syncmembers/listmemberschemas) template to list the member database schema.
+
+Only proceed to the next step once your schema refreshes successfully.
+
+## Update sync group
+
+Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncgroups/createorupdate) template to update your sync group.
+
+Update sync group by specifying the sync schema. Include your schema and masterSyncMemberName, which is the name that holds the schema you want to use.
+
+Sample request for updating sync group:
+
+```http
+PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187?api-version=2015-05-01-preview
+```
+
+```json
+{
+ "properties": {
+ "interval": -1,
+ "lastSyncTime": "0001-01-01T08:00:00Z",
+ "conflictResolutionPolicy": "HubWin",
+ "syncDatabaseId": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328",
+ "hubDatabaseUserName": "hubUser"
+ }
+}
+```
+
+Sample response for updating sync group:
+
+```json
+{
+ "properties": {
+ "interval": -1,
+ "lastSyncTime": "0001-01-01T08:00:00Z",
+ "conflictResolutionPolicy": "HubWin",
+ "syncDatabaseId": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328",
+ "hubDatabaseUserName": "hubUser",
+ "syncState": "NotReady"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187",
+ "name": "syncgroupcrud-3187",
+ "type": "Microsoft.Sql/servers/databases/syncGroups"
+}
+```
+
+```json
+{
+ "properties": {
+ "interval": -1,
+ "lastSyncTime": "0001-01-01T08:00:00Z",
+ "conflictResolutionPolicy": "HubWin",
+ "syncDatabaseId": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328",
+ "hubDatabaseUserName": "hubUser",
+ "syncState": "NotReady"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-3521/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187",
+ "name": "syncgroupcrud-3187",
+ "type": "Microsoft.Sql/servers/databases/syncGroups"
+}
+```
+## Update sync member
+
+Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncmembers/createorupdate) template to update your sync member.
+
+Sample request for updating a sync member:
+
+```http
+PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/syncMembers/syncgroupcrud-4879?api-version=2015-05-01-preview
+```
+
+```json
+{
+ "properties": {
+ "databaseType": "AzureSqlDatabase",
+ "serverName": "syncgroupcrud-3379.database.windows.net",
+ "databaseName": "syncgroupcrud-7421",
+ "userName": "myUser",
+ "syncDirection": "Bidirectional",
+ "syncState": "UnProvisioned"
+ }
+}
+```
+
+Sample response for updating a sync member:
+
+Status code: 200
+```json
+{
+ "properties": {
+ "databaseType": "AzureSqlDatabase",
+ "serverName": "syncgroupcrud-3379.database.windows.net",
+ "databaseName": "syncgroupcrud-7421",
+ "userName": "myUser",
+ "syncDirection": "Bidirectional",
+ "syncState": "UnProvisioned"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/syncMembers/syncgroupcrud-4879",
+ "name": "syncgroupcrud-4879",
+ "type": "Microsoft.Sql/servers/databases/syncGroups/syncMembers"
+}
+```
+
+Status code: 201
+```json
+{
+ "properties": {
+ "databaseType": "AzureSqlDatabase",
+ "serverName": "syncgroupcrud-3379.database.windows.net",
+ "databaseName": "syncgroupcrud-7421",
+ "userName": "myUser",
+ "syncDirection": "Bidirectional",
+ "syncState": "UnProvisioned"
+ },
+ "id": "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/syncMembers/syncgroupcrud-4879",
+ "name": "syncgroupcrud-4879",
+ "type": "Microsoft.Sql/servers/databases/syncGroups/syncMembers"
+}
+```
+
+## Trigger sync
+
+Use the [trigger sync](https://docs.microsoft.com/rest/api/sql/syncgroups/triggersync) template to trigger a sync operation.
+
+Sample request for triggering sync operation:
+
+```http
+POST https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/triggerSync?api-version=2015-05-01-preview
+```
+
+Sample response for triggering sync operation:
+
+Status code: 200
+
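As a hedged PowerShell sketch that reuses the sample URL above, you can trigger the sync and confirm the 200 status code like this:

```powershell
# Acquire an Azure Resource Manager token for the signed-in account (Az.Accounts module).
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token

# Same resource IDs as the sample request above.
$uri = "https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/syncgroupcrud-65440/providers/Microsoft.Sql/servers/syncgroupcrud-8475/databases/syncgroupcrud-4328/syncGroups/syncgroupcrud-3187/triggerSync?api-version=2015-05-01-preview"

# A 200 response indicates the sync operation was triggered.
$response = Invoke-WebRequest -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" } -UseBasicParsing
$response.StatusCode
```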
+## Next steps
+
+For more information about Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
+
+Additional SQL Database PowerShell script samples can be found in [Azure SQL Database PowerShell scripts](../powershell-script-content-guide.md).
+
+For more information about SQL Data Sync, see:
+
+- Overview - [Sync data across multiple cloud and on-premises databases with SQL Data Sync in Azure](../sql-data-sync-data-sql-server-sql-database.md)
+- Set up Data Sync
+ - Use the Azure portal - [Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL Server](../sql-data-sync-sql-server-configure.md)
+ - Use PowerShell - [Use PowerShell to sync data between a database in Azure SQL Database and SQL Server](sql-data-sync-sync-data-between-azure-onprem.md)
+- Data Sync Agent - [Data Sync Agent for SQL Data Sync in Azure](../sql-data-sync-agent-overview.md)
+- Best practices - [Best practices for SQL Data Sync in Azure](../sql-data-sync-best-practices.md)
+- Monitor - [Monitor SQL Data Sync with Azure Monitor logs](../monitor-tune-overview.md)
+- Troubleshoot - [Troubleshoot issues with SQL Data Sync in Azure](../sql-data-sync-troubleshoot.md)
+- Update the sync schema
+ - Use Transact-SQL - [Automate the replication of schema changes in SQL Data Sync in Azure](../sql-data-sync-update-sync-schema.md)
+ - Use PowerShell - [Use PowerShell to update the sync schema in an existing sync group](update-sync-schema-in-sync-group.md)
+
+For more information about SQL Database, see:
+
+- [SQL Database overview](../sql-database-paas-overview.md)
+- [Database Lifecycle Management](/previous-versions/sql/sql-server-guides/jj907294(v=sql.110))
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Before setting up the private link, read the [general requirements](sql-data-syn
- [Use PowerShell to sync between multiple databases in Azure SQL Database](scripts/sql-data-sync-sync-data-between-sql-databases.md) - [Use PowerShell to sync between a database in Azure SQL Database and a databases in a SQL Server instance](scripts/sql-data-sync-sync-data-between-azure-onprem.md)
+### Set up Data Sync with REST API
+- [Use REST API to sync between multiple databases in Azure SQL Database](scripts/sql-data-sync-sync-data-between-sql-databases-rest-api.md)
+ ### Review the best practices for Data Sync - [Best practices for Azure SQL Data Sync](sql-data-sync-best-practices.md)
azure-sql Sql Database Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-paas-overview.md
SQL Database enables you to easily define and scale performance within two diffe
If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
+> [!TIP]
+> How can we make Azure SQL better? [Take the survey](https://aka.ms/AzureSQLSurvey).
+ ## Deployment models Azure SQL Database provides the following deployment options for a database:
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
The following diagram outlines key features of SQL Managed Instance:
Azure SQL Managed Instance is designed for customers looking to migrate a large number of apps from an on-premises or IaaS, self-built, or ISV provided environment to a fully managed PaaS cloud environment, with as low a migration effort as possible. Using the fully automated [Azure Data Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md#create-an-azure-database-migration-service-instance), customers can lift and shift their existing SQL Server instance to SQL Managed Instance, which offers compatibility with SQL Server and complete isolation of customer instances with native VNet support. For more information on migration options and tools, see [Migration overview: SQL Server to Azure SQL Managed Instance](../migration-guides/managed-instance/sql-server-to-managed-instance-overview.md).</br> With Software Assurance, you can exchange your existing licenses for discounted rates on SQL Managed Instance using the [Azure Hybrid Benefit for SQL Server](https://azure.microsoft.com/pricing/hybrid-benefit/). SQL Managed Instance is the best migration destination in the cloud for SQL Server instances that require high security and a rich programmability surface.
+> [!TIP]
+> How can we make Azure SQL better? [Take the survey](https://aka.ms/AzureSQLSurvey).
+ ## Key features and capabilities SQL Managed Instance combines the best features that are available both in Azure SQL Database and the SQL Server database engine.
SQL Managed Instance combines the best features that are available both in Azure
| | | |No hardware purchasing and management <br>No management overhead for managing underlying infrastructure <br>Quick provisioning and service scaling <br>Automated patching and version upgrade <br>Integration with other PaaS data services |99.99% uptime SLA <br>Built-in [high availability](../database/high-availability-sla.md) <br>Data protected with [automated backups](../database/automated-backups-overview.md) <br>Customer configurable backup retention period <br>User-initiated [backups](/sql/t-sql/statements/backup-transact-sql?preserve-view=true&view=azuresqldb-mi-current) <br>[Point-in-time database restore](../database/recovery-using-backups.md#point-in-time-restore) capability | |**Security and compliance** | **Management**|
-|Isolated environment ([VNet integration](connectivity-architecture-overview.md), single tenant service, dedicated compute and storage) <br>[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)<br>[Azure Active Directory (Azure AD) authentication](../database/authentication-aad-overview.md), single sign-on support <br> <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current">Azure AD server principals (logins)</a> <br>Adheres to compliance standards same as Azure SQL Database <br>[SQL auditing](auditing-configure.md) <br>[Advanced Threat Protection](threat-detection-configure.md) |Azure Resource Manager API for automating service provisioning and scaling <br>Azure portal functionality for manual service provisioning and scaling <br>Data Migration Service
+|Isolated environment ([VNet integration](connectivity-architecture-overview.md), single tenant service, dedicated compute and storage) <br>[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)<br>[Azure Active Directory (Azure AD) authentication](../database/authentication-aad-overview.md), single sign-on support <br> <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true">Azure AD server principals (logins)</a> <br>Adheres to compliance standards same as Azure SQL Database <br>[SQL auditing](auditing-configure.md) <br>[Advanced Threat Protection](threat-detection-configure.md) |Azure Resource Manager API for automating service provisioning and scaling <br>Azure portal functionality for manual service provisioning and scaling <br>Data Migration Service
> [!IMPORTANT] > Azure SQL Managed Instance has been certified against a number of compliance standards. For more information, see the [Microsoft Azure Compliance Offerings](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3?command=Download&downloadType=Document&downloadId=44bbae63-bf4d-4e3b-9d3d-c96fb25ec363&tab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb&docTab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb_FAQ_and_White_Papers), where you can find the most current list of SQL Managed Instance compliance certifications, listed under **SQL Database**.
Migration of an encrypted database to SQL Managed Instance is supported via Azur
SQL Managed Instance supports traditional SQL Server database engine logins and logins integrated with Azure AD. Azure AD server principals (logins) (**public preview**) are an Azure cloud version of on-premises database logins that you are using in your on-premises environment. Azure AD server principals (logins) enable you to specify users and groups from your Azure AD tenant as true instance-scoped principals, capable of performing any instance-level operation, including cross-database queries within the same managed instance.
-A new syntax is introduced to create Azure AD server principals (logins), **FROM EXTERNAL PROVIDER**. For more information on the syntax, see <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current">CREATE LOGIN</a>, and review the [Provision an Azure Active Directory administrator for SQL Managed Instance](../database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance) article.
+A new syntax is introduced to create Azure AD server principals (logins), **FROM EXTERNAL PROVIDER**. For more information on the syntax, see <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true">CREATE LOGIN</a>, and review the [Provision an Azure Active Directory administrator for SQL Managed Instance](../database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance) article.
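As an illustration only (the instance host name and the Azure AD principal are placeholders, and this assumes a recent SqlServer PowerShell module whose `Invoke-Sqlcmd` supports `-AccessToken`), running the new syntax against the managed instance while connected as the Azure AD admin might look like this:

```powershell
# Placeholders: replace with your managed instance host name and an Azure AD user or group.
$server = "mymanagedinstance.00000000000.database.windows.net"
$token  = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token

# CREATE LOGIN ... FROM EXTERNAL PROVIDER creates an Azure AD server principal (login).
Invoke-Sqlcmd -ServerInstance $server -Database "master" -AccessToken $token `
    -Query "CREATE LOGIN [login_name@contoso.com] FROM EXTERNAL PROVIDER;"
```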
### Azure Active Directory integration and multi-factor authentication
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
Previously updated : 3/5/2021 Last updated : 3/16/2021
For information about restore statements, see [RESTORE statements](/sql/t-sql/st
### Service broker
-Cross-instance service broker isn't supported:
+Cross-instance service broker message exchange is supported only between Azure SQL Managed Instances:
-- `sys.routes`: As a prerequisite, you must select the address from sys.routes. The address must be LOCAL on every route. See [sys.routes](/sql/relational-databases/system-catalog-views/sys-routes-transact-sql).-- `CREATE ROUTE`: You can't use `CREATE ROUTE` with `ADDRESS` other than `LOCAL`. See [CREATE ROUTE](/sql/t-sql/statements/create-route-transact-sql).-- `ALTER ROUTE`: You can't use `ALTER ROUTE` with `ADDRESS` other than `LOCAL`. See [ALTER ROUTE](/sql/t-sql/statements/alter-route-transact-sql).
+- `CREATE ROUTE`: You can't use `CREATE ROUTE` with `ADDRESS` other than `LOCAL` or the DNS name of another SQL Managed Instance.
+- `ALTER ROUTE`: You can't use `ALTER ROUTE` with `ADDRESS` other than `LOCAL` or the DNS name of another SQL Managed Instance.
+
+Transport security is supported; dialog security is not:
+- `CREATE REMOTE SERVICE BINDING` is not supported.
Service broker is enabled by default and cannot be disabled. The following ALTER DATABASE options are not supported: - `ENABLE_BROKER`
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
+
+ Title: "Access to Azure SQL Database: Migration guide"
+description: This guide teaches you to migrate your Microsoft Access databases to Azure SQL Database using SQL Server Migration Assistant for Access (SSMA for Access).
+++
+ms.devlang:
+++ Last updated : 03/19/2021++
+# Migration guide: Access to Azure SQL Database
+
+This migration guide teaches you to migrate your Microsoft Access databases to Azure SQL Database using the SQL Server Migration Assistant for Access.
+
+For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+To migrate your Access database to Azure SQL Database, you need:
+
+- To verify that your source environment is supported.
+- [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+
+## Pre-migration
+
+After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
++
+### Assess
+
+Create an assessment using [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+
+To create an assessment, follow these steps:
+
+1. Open SQL Server Migration Assistant for Access.
+1. Select **File** and then choose **New Project**. Provide a name for your migration project.
+1. Select **Add Databases** and choose the databases to add to your new project.
+1. In **Access Metadata Explorer**, right-click the database and then choose **Create Report**.
+1. Review the sample assessment.
+
+### Convert schema
+
+To convert database objects, follow these steps:
+
+1. Select **Connect to Azure SQL Database** and provide connection details.
+1. Right-click the database in **Access Metadata Explorer** and choose **Convert schema**.
+1. (Optional) To convert an individual object, right-click the object and choose **Convert schema**. An object that has been converted appears bold in the **Access Metadata Explorer**.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
++
+## Migrate
+
+After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migrating data is a bulk-load operation that moves rows of data into Azure SQL Database in transactions. The number of rows to be loaded into Azure SQL Database in each transaction is configured in the project settings.
+
+To migrate data by using SSMA for Access, follow these steps:
+
+1. If you haven't already, select **Connect to Azure SQL Database** and provide connection details.
+1. Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the Access schema to Azure SQL Database.
+1. Use **Access Metadata Explorer** to check boxes next to the items you want to migrate. If you want to migrate the entire database, check the box next to the database.
+1. Right-click the database or object you want to migrate, and choose **Migrate data**.
+ To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box.
+
+## Post-migration
+
+After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+
+### Perform tests
+
+The test approach for database migration consists of performing the following activities:
+
+ 1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+
+ 2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+
+ 3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results. A minimal sketch of a target-side validation query follows this list.
+
+ 4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
+
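The following is a minimal, hypothetical sketch of a target-side validation check: it counts rows in one migrated table on Azure SQL Database so you can compare the result with the count you collect from the source Access table using your source tooling. The server, database, table, and credentials are placeholders:

```powershell
# Placeholders: target server, database, and a table migrated from Access.
$server   = "yourserver.database.windows.net"
$database = "yourdatabase"
$table    = "dbo.Customers"

# Row count on the target; compare with the count taken from the source Access table.
Invoke-Sqlcmd -ServerInstance $server -Database $database `
    -Username "yourlogin" -Password "yourpassword" `
    -Query "SELECT COUNT(*) AS TargetRowCount FROM $table;"
```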
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+
+For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
+
+## Migration assets
+
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| **Title/link** | **Description** |
+| - | -- |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
++
+These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+
+## Next steps
+
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+
+- To learn more about Azure SQL Database see:
+ - [An overview of SQL Database](../../database/sql-database-paas-overview.md)
+ - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
++
+- To learn more about the framework and adoption cycle for Cloud migrations, see
+ - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
+ - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+
+- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
+- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Title: "DB2 to SQL Database: Migration guide"
-description: Follow this guide to migrate your DB2 databases to Azure SQL Database.
+description: This guide teaches you to migrate your DB2 databases to Azure SQL Database using SQL Server Migration Assistant for DB2 (SSMA for DB2).
The test approach for database migration consists of the following activities:
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results. 1. **Run performance tests**: Run performance test against the source and the target, and then analyze and compare the results.
- > [!NOTE]
- > For assistance developing and running post-migration validation tests, consider the Data Quality Solution available from the partner [QuerySurge](https://www.querysurge.com/company/partners/microsoft).
- ## Leverage advanced features
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
+
+ Title: "MySQL to Azure SQL Database: Migration guide"
+description: This guide teaches you to migrate your MySQL databases to Azure SQL Database using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
+++
+ms.devlang:
+++ Last updated : 03/19/2021++
+# Migration guide: MySQL to Azure SQL Database
+
+This guide teaches you to migrate your MySQL database to Azure SQL Database using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
+
+For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+To migrate your MySQL database to Azure SQL Database, you need:
+
+- To verify that your source environment is supported. Currently, MySQL 5.6 and 5.7 are supported.
+- [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/confirmation.aspx?id=54257)
++
+## Pre-migration
+
+After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+
+### Assess
+
+By using [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/confirmation.aspx?id=54257), you can review database objects and data, and assess databases for migration.
+
+To create an assessment, perform the following steps.
+
+1. Open SQL Server Migration Assistant for MySQL.
+1. Select **File** from the menu and then choose **New Project**. Provide a project name and a location to save your project.
+1. Choose **Azure SQL Database** as the migration target.
+1. Choose **Connect to MySQL** and provide connection details to connect your MySQL server.
+1. Right-click the MySQL schema in **MySQL Metadata Explorer** and choose **Create report**. Alternatively, you can select **Create report** from the top-line navigation bar.
+1. Review the HTML report for conversion statistics, as well as errors and warnings. Analyze it to understand conversion issues and resolutions.
+
+   This report can also be accessed from the SSMA projects folder as selected in the first screen. From the example above, locate the report.xml file in:
+
+ `drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\`
+
+ and open it in Excel to get an inventory of MySQL objects and the effort required to perform schema conversions.
+
+### Validate data types
+
+Before you perform schema conversion, validate the default data type mappings or change them based on your requirements. You can do so either by navigating to the **Tools** menu and choosing **Project Settings**, or by changing the type mapping for each table by selecting the table in **MySQL Metadata Explorer**.
+
+### Convert schema
+
+To convert the schema, follow these steps:
+
+1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**.
+1. Choose **Connect to Azure SQL Database** from the top-line navigation bar and provide connection details. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
+1. Right-click the schema and choose **Convert schema**.
+1. After the schema is finished converting, compare the converted code to the original code to identify potential problems.
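+
+As a spot check that is not part of SSMA itself, once the converted schema has been published to the target database (see the next section) you can review the resulting column definitions with a catalog query. The schema and table names below are placeholders.
+
+```sql
+-- Hypothetical spot check: review the data types produced for one converted table.
+-- Replace dbo.Customers with one of your own migrated tables.
+SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
+FROM INFORMATION_SCHEMA.COLUMNS
+WHERE TABLE_SCHEMA = 'dbo'
+  AND TABLE_NAME = 'Customers'
+ORDER BY ORDINAL_POSITION;
+```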
+++
+## Migrate
+
+After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps: publishing the schema and migrating the data.
+
+To publish the schema and migrate the data, follow these steps:
+
+1. Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database.
+1. Right-click the MySQL schema from the **MySQL Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation.
+1. After migration completes, view the **Data Migration** report.
+1. Validate the migration by reviewing the data and schema on Azure SQL Database by using SQL Server Management Studio (SSMS).
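+
+One way to do that spot check from SSMS is a generic catalog query such as the following. It isn't generated by SSMA; it simply lists the migrated tables with their row counts so you can compare them with the equivalent counts on the MySQL source.
+
+```sql
+-- List migrated tables and their row counts in the target Azure SQL Database.
+SELECT s.name AS schema_name,
+       t.name AS table_name,
+       SUM(p.rows) AS row_count
+FROM sys.tables AS t
+JOIN sys.schemas AS s ON s.schema_id = t.schema_id
+JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
+GROUP BY s.name, t.name
+ORDER BY schema_name, table_name;
+```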
++
+## Post-migration
+
+After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+
+### Perform tests
+
+The test approach for database migration consists of performing the following activities:
+
+1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+
+2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+
+3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+
+4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
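+
+A minimal validation-query sketch follows; the table and column names are examples only. Run the equivalent statement on both the MySQL source and the Azure SQL Database target and compare the results.
+
+```sql
+-- Example validation query: a quick fingerprint of one table.
+-- The same values should be produced on the source and the target.
+SELECT COUNT(*)         AS row_count,
+       SUM(order_total) AS total_amount,
+       MIN(order_date)  AS first_order,
+       MAX(order_date)  AS last_order
+FROM orders;
+```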
+
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+
+For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
+
+## Migration assets
+
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| Title/link | Description |
+| - | - |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+
+These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+
+## Next steps
+
+- Be sure to check out the [Azure Total Cost of Ownership (TCO) Calculator](https://aka.ms/azure-tco) to help estimate the cost savings you can realize by migrating your workloads to Azure.
+
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Services and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+
+- For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+
+For videos, see:
+- [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
+
+ Title: "Oracle to SQL Database: Migration guide"
+description: This guide teaches you to migrate your Oracle schema to Azure SQL Database using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
+++
+ms.devlang:
+++ Last updated : 08/25/2020++
+# Migration guide: Oracle to Azure SQL Database
+
+This guide teaches you to migrate your Oracle schemas to Azure SQL Database using SQL Server Migration Assistant for Oracle.
+
+For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+To migrate your Oracle schema to SQL Database, you need:
+
+- To verify your source environment is supported.
+- To download [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- A target [Azure SQL Database](../../database/single-database-create-quickstart.md).
+- The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
+
+
+## Pre-migration
+
+After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
+++
+### Assess
++
+Use the SQL Server Migration Assistant (SSMA) for Oracle to review database objects and data, assess databases for migration, migrate database objects to Azure SQL Database, and then finally migrate data to the database.
++
+To create an assessment, follow these steps:
++
+1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Select **File** and then choose **New Project**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
+1. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box.
+1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+
+ For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`
+++
+### Validate data types
+
+Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+
+1. Select **Tools** from the menu.
+1. Select **Project Settings**.
+1. Select the **Type mappings** tab.
+1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**.
+
+### Convert schema
+
+To convert the schema, follow these steps:
+
+1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
+1. Select **Connect to Azure SQL Database**.
+ 1. Enter connection details to connect your database in Azure SQL Database.
+ 1. Choose your target SQL Database from the drop-down.
+ 1. Select **Connect**.
+
+1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+
+## Migrate
+
+After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps: publishing the schema and migrating the data.
+
+To publish your schema and migrate your data, follow these steps:
+
+1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**.
+1. Migrate the data: Right-click the schema from the **Oracle Metadata Explorer** and choose **Migrate Data**.
+1. Provide connection details for both Oracle and Azure SQL Database.
+1. View the **Data Migration report**.
+1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
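+
+As one way to support that last review step (a plain T-SQL query, not an SSMA feature), you can compare the object counts in the target database with what the SSMA conversion report listed for the Oracle schema:
+
+```sql
+-- Count the user objects that were published to Azure SQL Database, by type.
+SELECT type_desc, COUNT(*) AS object_count
+FROM sys.objects
+WHERE is_ms_shipped = 0
+GROUP BY type_desc
+ORDER BY type_desc;
+```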
++
+Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
+
+- [SQL Server Migration Assistant: How to assess and migrate data from non-Microsoft data platforms to SQL Server](https://blogs.msdn.microsoft.com/datamigration/2016/11/16/sql-server-migration-assistant-how-to-assess-and-migrate-databases-from-non-microsoft-data-platforms-to-sql-server/)
+- [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services)
+- [SQL Server Integration
++
+## Post-migration
+
+After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+
+The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries, providing you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate your Java applications from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog.
+++
+### Perform tests
+
+The test approach for database migration consists of performing the following activities:
+
+1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+
+2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+
+3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+
+4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
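+
+For the performance tests, one simple approach on the Azure SQL Database side is to wrap a representative workload query with timing and I/O statistics, and then compare the numbers with measurements captured on the Oracle source using its own tooling. The query below is only a placeholder.
+
+```sql
+-- Capture elapsed time and logical reads for a representative query on the target.
+SET STATISTICS TIME ON;
+SET STATISTICS IO ON;
+
+SELECT COUNT(*)
+FROM dbo.Orders                                  -- placeholder table
+WHERE OrderDate >= DATEADD(month, -1, SYSDATETIME());
+
+SET STATISTICS TIME OFF;
+SET STATISTICS IO OFF;
+```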
++
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+
+> [!NOTE]
+> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
++
+## Migration assets
+
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| **Title/link** | **Description** |
+| - | -- |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that are needed to run an SSMA assessment in console mode. The sources.csv file is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml. |
+| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+
+These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+
+## Next steps
+
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Services and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+
+- To learn more about Azure SQL Database, see:
+ - [An overview of Azure SQL Database](../../database/sql-database-paas-overview.md)
+ - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/)
++
+- To learn more about the framework and adoption cycle for Cloud migrations, see
+ - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
+  - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+
+- For video content, see:
+ - [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
++
azure-sql Sap Ase To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sap-ase-to-sql-database.md
+
+ Title: "SAP ASE to Azure SQL Database: Migration guide"
+description: This guide teaches you to migrate your SAP ASE databases to Azure SQL Database using SQL Server Migration Assistant for SAP Adaptive Server Enterprise.
+++
+ms.devlang:
+++ Last updated : 03/19/2021++
+# Migration guide: SAP ASE to Azure SQL Database
+
+This guide teaches you to migrate your SAP ASE databases to Azure SQL Database using SQL Server Migration Assistant for SAP Adaptive Server Enterprise.
+
+For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+To migrate your SAP ASE database to Azure SQL Database, you need:
+
+- To verify that your source environment is supported.
+- [SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256).
+
+## Pre-migration
+
+After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+
+### Assess
+
+Use [SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256) to review database objects and data, assess databases for migration, migrate Sybase database objects to Azure SQL Database, and then migrate data to Azure SQL Database. To learn more, see [SQL Server Migration Assistant for Sybase (SybaseToSQL)](/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql).
+
+To create an assessment, follow these steps:
+
+1. Open **SSMA for Sybase**.
+1. Select **File** and then choose **New Project**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
+1. Enter values for the SAP ASE connection details in the **Connect to Sybase** dialog box.
+1. Right-click the SAP database you want to migrate, and then choose **Create report**. This generates an HTML report.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of SAP ASE objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+
+ For example: `drive:\<username>\Documents\SSMAProjects\MyDB2Migration\report\report_<date>`.
++
+### Validate type mappings
+
+Before you perform schema conversion, validate the default data type mappings or change them based on your requirements. You can do so either by navigating to the **Tools** menu and choosing **Project Settings**, or by changing the type mapping for each table by selecting the table in **SAP ASE Metadata Explorer**.
++
+### Convert schema
+
+To convert the schema, follow these steps:
+
+1. (Optional) To convert dynamic or ad-hoc queries, right-click the node, and choose **Add Statement**.
+1. Select **Connect to Azure SQL Database** in the top-line navigation bar and provide Azure SQL Database details. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
+1. Right-click the SAP ASE schema in **Sybase Metadata Explorer** and choose **Convert schema**. Alternatively, you can select **Convert schema** from the top-line navigation bar.
+1. Compare and review the structure of the schema to identify potential problems.
+
+ After schema conversion you can save this project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to Azure SQL Database.
+
+To learn more, see [Convert schema](/sql/ssma/sybase/converting-sybase-ase-database-objects-sybasetosql)
++
+## Migrate
+
+After you have the necessary prerequisites in place and have completed the tasks associated with the **Pre-migration** stage, you are ready to perform the schema and data migration.
+
+To publish the schema and migrate the data, follow these steps:
+
+1. Right-click the database in **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the SAP ASE schema to the Azure SQL Database instance.
+1. Right-click the SAP ASE schema in **SAP ASE Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar.
+1. After migration completes, view the **Data Migration Report**.
+1. Validate the migration by reviewing the data and schema in Azure SQL Database by using SQL Server Management Studio (SSMS).
++
+## Post-migration
+
+After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+
+### Perform tests
+
+The test approach for database migration consists of performing the following activities:
+
+1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+
+2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+
+3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+
+4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
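+
+One possible validation pattern, sketched below, assumes you have bulk-copied the source rows for a table into a staging schema in the same target database so that the two copies can be compared directly; the table names are placeholders.
+
+```sql
+-- Rows returned by either query indicate a mismatch between source and target data.
+SELECT * FROM dbo.Orders
+EXCEPT
+SELECT * FROM staging.Orders;
+
+SELECT * FROM staging.Orders
+EXCEPT
+SELECT * FROM dbo.Orders;
+```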
+
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+
+> [!NOTE]
+> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
++
+## Next steps
+
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Services and tools for data migration](../../../dms/dms-tools-matrix.md).
+
+- To learn more about Azure SQL Database, see:
+ - [An overview of SQL Database](../../database/sql-database-paas-overview.md)
+  - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
++
+- To learn more about the framework and adoption cycle for Cloud migrations, see
+ - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
+  - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+
+- To assess the application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
+- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
Previously updated : 11/06/2020 Last updated : 03/19/2021 # Migration guide: SQL Server to SQL Database [!INCLUDE[appliesto--sqldb](../../includes/appliesto-sqldb.md)]
The test approach for database migration consists of the following activities:
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results. 1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
- > [!NOTE]
- > For assistance developing and running post-migration validation tests, consider the Data Quality Solution available from the partner [QuerySurge](https://www.querysurge.com/company/partners/microsoft).
- ## Leverage advanced features
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
Title: "DB2 to SQL Managed Instance: Migration guide"
-description: Follow this guide to migrate your DB2 databases to Azure SQL Managed Instance.
+description: This guide teaches you to migrate your DB2 databases to Azure SQL Managed Instance using SQL Server Migration Assistant for DB2.
The test approach for database migration consists of the following activities:
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results. 1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
- > [!NOTE]
- > For assistance developing and running post-migration validation tests, consider the Data Quality Solution available from the partner [QuerySurge](https://www.querysurge.com/company/partners/microsoft).
- ## Leverage advanced features
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
+
+ Title: "Oracle to SQL Managed Instance: Migration guide"
+description: This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance using SQL Server Migration Assistant for Oracle.
+++
+ms.devlang:
++++ Last updated : 11/06/2020+
+# Migration guide: Oracle to Azure SQL Managed Instance
+
+This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance using SQL Server Migration Assistant for Oracle.
+
+For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+To migrate your Oracle schema to SQL Managed Instance, you need:
+
+- To verify your source environment is supported.
+- To download [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- A target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md).
+- The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
+
+
+## Pre-migration
+
+After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
+++
+### Assess
+
+Use the SQL Server Migration Assistant (SSMA) for Oracle to review database objects and data, assess databases for migration, migrate database objects to Azure SQL Managed Instance, and then finally migrate data to the database.
+
+To create an assessment, follow these steps:
++
+1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Select **File** and then choose **New Project**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**.
+1. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box.
+1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+
+ For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`
++
+### Validate data types
+
+Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+
+1. Select **Tools** from the menu.
+1. Select **Project Settings**.
+1. Select the **Type mappings** tab.
+1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**.
+
+### Convert schema
+
+To convert the schema, follow these steps:
+
+1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
+1. Select **Connect to Azure SQL Managed Instance**.
+ 1. Enter connection details to connect your database in Azure SQL Managed Instance.
+ 1. Choose your target database from the drop-down.
+ 1. Select **Connect**.
+1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+
+## Migrate
+
+After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps: publishing the schema and migrating the data.
+
+To publish your schema and migrate your data, follow these steps:
+
+1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Managed Instance Metadata Explorer** and choose **Synchronize with Database**.
+1. Migrate the data: Right-click the schema from the **Oracle Metadata Explorer** and choose **Migrate Data**.
+1. Provide connection details for both Oracle and Azure SQL Managed Instance.
+1. View the **Data Migration report**.
+1. Connect to your Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+
+Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
+
+- [SQL Server Migration Assistant: How to assess and migrate data from non-Microsoft data platforms to SQL Server](https://blogs.msdn.microsoft.com/datamigration/2016/11/16/sql-server-migration-assistant-how-to-assess-and-migrate-databases-from-non-microsoft-data-platforms-to-sql-server/)
+- [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services)
+- [SQL Server Integration
+
+
++
+## Post-migration
+
+After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+
+The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries, providing you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate your Java applications from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog.
+
+### Perform tests
+
+The test approach for database migration consists of performing the following activities:
+
+1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+
+2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+
+3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+
+4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
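+
+Because Azure SQL Managed Instance supports the Query Store, one way to review a performance-test run afterwards is to pull the slowest queries it captured. This sketch assumes the Query Store is enabled on the target database.
+
+```sql
+-- Surface the longest-running queries captured during the performance-test run.
+SELECT TOP (10)
+       qt.query_sql_text,
+       rs.avg_duration / 1000.0 AS avg_duration_ms,
+       rs.count_executions
+FROM sys.query_store_query_text AS qt
+JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
+JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
+JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
+ORDER BY rs.avg_duration DESC;
+```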
+
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+
+> [!NOTE]
+> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
++
+## Migration assets
+
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| **Title/link** | **Description** |
+| - | -- |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that are needed to run an SSMA assessment in console mode. The sources.csv file is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml. |
+| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+
+These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+
+## Next steps
+
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Services and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+
+- To learn more about Azure SQL Managed Instance, see:
+ - [An overview of Azure SQL Managed Instance](../../database/sql-database-paas-overview.md)
+ - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/)
++
+- To learn more about the framework and adoption cycle for Cloud migrations, see
+ - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
+  - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+
+- For video content, see:
+ - [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
Title: "SQL Server to SQL Managed Instance: Migration guide"
-description: Follow this guide to migrate your SQL Server databases to Azure SQL Managed Instance.
+description: This guide teaches you to migrate your SQL Server databases to Azure SQL Managed Instance.
The test approach for database migration consists of the following activities:
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results. 1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
- > [!NOTE]
- > For assistance developing and running post-migration validation tests, consider the Data Quality Solution available from the partner [QuerySurge](https://www.querysurge.com/company/partners/microsoft).
-- ## Leverage advanced features
azure-sql Sql Server To Managed Instance Performance Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-performance-baseline.md
Title: "SQL Server to SQL Managed Instance: Performance analysis" description: Learn to create and compare a performance baseline when migrating your SQL Server databases to Azure SQL Managed Instance. --++ ms.devlang:
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
Title: "DB2 to SQL Server on Azure VMs (Migration guide)"
-description: Follow this guide to migrate your DB2 server to SQL Server on Azure VMs.
+ Title: "DB2 to SQL Server on Azure VMs: Migration guide"
+description: This guide teaches you to migrate your DB2 database to SQL Server on Azure VMs using SQL Server Migration Assistant for DB2.
The test approach for database migration consists of the following activities:
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results. 1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
- > [!NOTE]
- > For assistance developing and running post-migration validation tests, consider the Data Quality Solution available from the partner [QuerySurge](https://www.querysurge.com/company/partners/microsoft).
## Migration assets
For a matrix of the Microsoft and third-party services and tools that are availa
For other migration guides, see [Database Migration](https://datamigration.microsoft.com/). For video content, see:-- [How to use the Database Migration Guide](https://azure.microsoft.com/resources/videos/how-to-use-the-azure-database-migration-guide/) - [Overview of the migration journey](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
+
+ Title: "Oracle to SQL Server on Azure VM: Migration guide"
+description: This guide teaches you to migrate your Oracle schemas to SQL Server on Azure VMs using SQL Server Migration Assistant for Oracle.
+++
+ms.devlang:
++++ Last updated : 11/06/2020+
+# Migration guide: Oracle to SQL Server on Azure VM
+
+This guide teaches you to migrate your Oracle schemas to SQL Server on Azure VM using SQL Server Migration Assistant for Oracle.
+
+For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+To migrate your Oracle schema to SQL Server on Azure VM, you need:
+
+- To verify your source environment is supported.
+- To download [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- A target [SQL Server VM](../../virtual-machines/windows/sql-vm-create-portal-quickstart.md).
+- The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
+
+## Pre-migration
+
+As you prepare for migrating to the cloud, verify that your source environment is supported and that you have addressed any prerequisites. This will help to ensure an efficient and successful migration.
+
+This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
+
+### Discover
+
+Use the [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883) to identify existing data sources and details about the features that are being used by your business to get a better understanding of and plan for the migration. This process involves scanning the network to identify all your organization's Oracle instances together with the version and features in use.
+
+To use the MAP Toolkit to perform an inventory scan, follow these steps:
+
+1. Open the [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883).
+1. Select **Create/Select database**.
+1. Select **Create an inventory database**, enter a name for the new inventory database you're creating, provide a brief description, and then select **OK**.
+1. Select **Collect inventory data** to open the **Inventory and Assessment Wizard**.
+1. In the **Inventory and Assessment Wizard**, choose **Oracle** and then select **Next**.
+1. Choose the computer search option that best suits your business needs and environment, and then select **Next**:
+1. Either enter credentials or create new credentials for the systems that you want to explore, and then select **Next**.
+1. Set the order of the credentials, and then select **Next**.
+1. Specify the credentials for each computer you want to discover. You can use unique credentials for every computer/machine, or you can choose to use the **All Computer Credentials** list.
+1. Verify your selection summary, and then select **Finish**.
+1. After the scan completes, view the **Data Collection** summary report. The scan takes a few minutes, depending on the number of databases. Select **Close** when finished.
+1. Select **Options** to generate a report about the Oracle Assessment and database details. Select both options (one by one) to generate the report.
++
+### Assess
+
+After identifying the data sources, use the [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258) to assess the Oracle instance(s) migrating to the SQL Server VM so that you understand the gaps between the two. Using the migration assistant, you can review database objects and data, assess databases for migration, migrate database objects to SQL Server, and then migrate data to SQL Server.
+
+To create an assessment, follow these steps:
+
+1. Open the [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Select **File** and then choose **New Project**.
+1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**.
+1. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box.
+1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report with conversion statistics and errors or warnings, if any. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Review the HTML report for conversion statistics, as well as errors and warnings. Analyze it to understand conversion issues and resolutions.
+
+   This report can also be accessed from the SSMA projects folder as selected in the first screen. From the example above, locate the report.xml file in:
+
+ `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2016_11_12T02_47_55\`
+
+ and then open it in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions.
++
+### Validate data types
+
+Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+
+1. Select **Tools** from the menu.
+1. Select **Project Settings**.
+1. Select the **Type mappings** tab.
+1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**.
+++
+### Convert schema
+
+To convert the schema, follow these steps:
+
+1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**.
+1. Choose **Connect to SQL Server** from the top-line navigation bar and provide connection details for your SQL Server on Azure VM. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
+1. Right-click the schema and choose **Convert Schema**.
+1. After the schema is finished converting, compare and review the structure of the schema to identify potential problems.
+
+ You can save the project locally for an offline schema remediation exercise. You can do so by selecting **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Server.
++
+## Migrate
+
+After you have the necessary prerequisites in place and have completed the tasks associated with the **Pre-migration** stage, you are ready to perform the schema and data migration. Migration involves two steps: publishing the schema and migrating the data.
++
+To publish the schema and migrate the data, follow these steps:
+
+1. Right-click the database from the **SQL Server Metadata Explorer** and choose **Synchronize with Database**. This action publishes the Oracle schema to SQL Server on Azure VM.
+1. Right-click the Oracle schema from the **Oracle Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation.
+1. Provide connection details for Oracle and SQL Server on Azure VM in the dialog box.
+1. After migration completes, view the **Data Migration** report.
+1. Connect to your SQL Server on Azure VM using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) to review data and schema on your SQL Server instance.
++
+In addition to using SSMA, you can also use SQL Server Integration Services (SSIS) to migrate the data. To learn more, see:
+- The article [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services).
+- The white paper [SQL Server Integration
+++
+## Post-migration
+
+After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+
+The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries, providing you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate your Java applications from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog.
+
+### Perform tests
+
+The test approach for database migration consists of performing the following activities:
+
+1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+
+2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+
+3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+
+4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
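+
+When a full load-testing tool isn't available, a very simple timing harness on the SQL Server target can serve as a starting point; the query under test is a placeholder, and the measurement should be repeated several times and compared with timings gathered on the Oracle source.
+
+```sql
+-- Measure elapsed time for one representative workload query.
+DECLARE @start datetime2 = SYSDATETIME();
+
+SELECT COUNT(*)
+FROM dbo.Orders              -- placeholder table
+WHERE Status = 'SHIPPED';    -- placeholder predicate
+
+SELECT DATEDIFF(millisecond, @start, SYSDATETIME()) AS elapsed_ms;
+```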
+
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+
+> [!NOTE]
+> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
++
+## Migration assets
+
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| **Title/link** | **Description** |
+| - | -- |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that are needed to run an SSMA assessment in console mode. The sources.csv file is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml. |
+| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+
+These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+
+## Next steps
+
+- To check the availability of services applicable to SQL Server, see the [Azure Global infrastructure center](https://azure.microsoft.com/global-infrastructure/services/?regions=all&amp;products=synapse-analytics,virtual-machines,sql-database).
+
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+
+- To learn more about Azure SQL, see:
+ - [Deployment options](../../azure-sql-iaas-vs-paas-what-is-overview.md)
+ - [SQL Server on Azure VMs](../../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md)
+  - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
++
+- To learn more about the framework and adoption cycle for cloud migrations, see:
+ - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
+   - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+
+- For information about licensing, see:
+ - [Bring your own license with the Azure Hybrid Benefit](../../virtual-machines/windows/licensing-model-azure-hybrid-benefit-ahb-change.md)
+ - [Get free extended support for SQL Server 2008 and SQL Server 2008 R2](../../virtual-machines/windows/sql-server-2008-extend-end-of-support.md)
++
+- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
+- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+
azure-sql Sql Server To Sql On Azure Vm Individual Databases Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide.md
Title: SQL Server to SQL Server on Azure VMs (Migration guide)
-description: Follow this guide to migrate your individual SQL Server databases to SQL Server on Azure Virtual Machines (VMs).
+ Title: "SQL Server to SQL Server on Azure VMs: Migration guide"
+description: This guide teaches you to migrate your individual SQL Server databases to SQL Server on Azure VMs.
Previously updated : 11/06/2020 Last updated : 03/19/2021 # Migration guide: SQL Server to SQL Server on Azure VMs
The test approach for database migration consists of performing the following ac
> [!TIP] > Use the [Database Experimentation Assistant (DEA)](/sql/dea/database-experimentation-assistant-overview) to assist with evaluating the target SQL Server performance.
->
+ ### Optimize
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution description: Learn about the platform updates to Azure VMware Solution. Previously updated : 03/05/2021 Last updated : 03/16/2021 # Platform updates for Azure VMware Solution
+Important updates to Azure VMware Solution will be applied starting in March 2021. You'll receive notification through Azure Service Health that includes the timeline of the maintenance. In this article, you'll learn what to expect during this maintenance operation and changes to your private cloud.
-## March 4, 2021
+## March 15, 2021
+
+- The Azure VMware Solution service will perform maintenance to update the vCenter Server in your private cloud to vCenter Server 6.7 Update 3l through March 19, 2021.
-Important updates to Azure VMware Solutions will be applied starting in March 2021. You'll receive notification through Azure Service Health that includes the timeline of the maintenance. In this article, you learn what to expect during this maintenance operation and changes to your private cloud.
+- During this time, VMware vCenter will be unavailable, and you won't be able to manage VMs (stop, start, create, delete). VMware High Availability (HA) will continue to operate to provide protection for existing VMs. Private cloud scaling (adding/removing servers and clusters) will also be unavailable.
+
+For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).
+
+## March 4, 2021
- Azure VMware Solutions will apply patches to ESXi in existing private clouds to [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) through March 15, 2021.
Important updates to Azure VMware Solutions will be applied starting in March 20
>[!NOTE] >This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter and clear automatically as the maintenance progresses. - ## Post update Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
azure-vmware Concepts Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-role-based-access-control.md
Title: Concepts - vSphere role-based access control (vSphere RBAC) description: Learn about the key capabilities of vSphere role-based access control for Azure VMware Solution Previously updated : 02/02/2021 Last updated : 03/16/2021 # vSphere role-based access control (vSphere RBAC) for Azure VMware Solution
Last updated 02/02/2021
In Azure VMware Solution, vCenter has a built-in local user called cloudadmin and assigned to the built-in CloudAdmin role. The local cloudadmin user is used to set up users in AD. In general, the CloudAdmin role creates and manages workloads in your private cloud. In Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from other VMware cloud solutions. > [!NOTE]
-> Azure VMware Solution currently doesn't offer custom roles on vCenter or the Azure VMware Solution portal.
+> Azure VMware Solution offers custom roles on vCenter but currently does not offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter](#create-custom-roles-on-vcenter) section later in this article.
-In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter administrator@vsphere.local account. They can also have additional Active Directory (AD) users/groups assigned.
+In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter administrator@vsphere.local account. They can also have more Active Directory (AD) users/groups assigned.
In an Azure VMware Solution deployment, the administrator doesn't have access to the administrator user account. But they can assign AD users and groups to the CloudAdmin role on vCenter.
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
| Privilege | Description | | | -- | | **Alarms** | Acknowledge alarm<br />Create alarm<br />Disable alarm action<br />Modify alarm<br />Remove alarm<br />Set alarm status |
-| **Permissions** | Modify permissions |
+| **Permissions** | Modify permissions<br />Modify role |
| **Content Library** | Add library item<br />Create a subscription for a published library<br />Create local library<br />Create subscribed library<br />Delete library item<br />Delete local library<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings | | **Cryptographic operations** | Direct access | | **Datastore** | Allocate space<br />Browse datastore<br />Configure datastore<br />Low-level file operations<br />Remove files<br />Update virtual machine metadata |
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
| **Profile** | Profile driven storage view | | **Storage view** | View | | **vApp** | Add virtual machine<br />Assign resource pool<br />Assign vApp<br />Clone<br />Create<br />Delete<br />Export<br />Import<br />Move<br />Power off<br />Power on<br />Rename<br />Suspend<br />Unregister<br />View OVF environment<br />vApp application configuration<br />vApp instance configuration<br />vApp managedBy configuration<br />vApp resource configuration |
-| **Virtual machine** | Change Configuration<br />&#160;&#160;&#160;&#160;Acquire disk lease<br />&#160;&#160;&#160;&#160;Add existing disk<br />&#160;&#160;&#160;&#160;Add new disk<br />&#160;&#160;&#160;&#160;Add or remove device<br />&#160;&#160;&#160;&#160;Advanced configuration<br />&#160;&#160;&#160;&#160;Change CPU count<br />&#160;&#160;&#160;&#160;Change memory<br />&#160;&#160;&#160;&#160;Change settings<br />&#160;&#160;&#160;&#160;Change swapfile placement<br />&#160;&#160;&#160;&#160;Change resource<br />&#160;&#160;&#160;&#160;Configure host USB device<br />&#160;&#160;&#160;&#160;Configure raw device<br />&#160;&#160;&#160;&#160;Configure managedBy<br />&#160;&#160;&#160;&#160;Display connection settings<br />&#160;&#160;&#160;&#160;Extend virtual disk<br />&#160;&#160;&#160;&#160;Modify device settings<br />&#160;&#160;&#160;&#160;Query fault tolerance compatibility<br />&#160;&#160;&#160;&#160;Query unowned files<br />&#160;&#160;&#160;&#160;Reload from paths<br />&#160;&#160;&#160;&#160;Remove disk<br />&#160;&#160;&#160;&#160;Rename<br />&#160;&#160;&#160;&#160;Reset guest information<br />&#160;&#160;&#160;&#160;Set annotation<br />&#160;&#160;&#160;&#160;Toggle disk change tracking<br />&#160;&#160;&#160;&#160;Toggle fork parent<br />&#160;&#160;&#160;&#160;Upgrade virtual machine compatibility<br />Edit inventory<br />&#160;&#160;&#160;&#160;Create from existing<br />&#160;&#160;&#160;&#160;Create new<br />&#160;&#160;&#160;&#160;Move<br />&#160;&#160;&#160;&#160;Register<br />&#160;&#160;&#160;&#160;Remove<br />&#160;&#160;&#160;&#160;Unregister<br />Guest operations<br />&#160;&#160;&#160;&#160;Guest operation alias modification<br />&#160;&#160;&#160;&#160;Guest operation alias query<br />&#160;&#160;&#160;&#160;Guest operation modifications<br />&#160;&#160;&#160;&#160;Guest operation program execution<br />&#160;&#160;&#160;&#160;Guest operation queries<br />Interaction<br />&#160;&#160;&#160;&#160;Answer question<br />&#160;&#160;&#160;&#160;Back up operation on virtual machine<br />&#160;&#160;&#160;&#160;Configure CD media<br />&#160;&#160;&#160;&#160;Configure floppy media<br />&#160;&#160;&#160;&#160;Connect devices<br />&#160;&#160;&#160;&#160;Console interaction<br />&#160;&#160;&#160;&#160;Create screenshot<br />&#160;&#160;&#160;&#160;Defragment all disks<br />&#160;&#160;&#160;&#160;Drag and drop<br />&#160;&#160;&#160;&#160;Guest operating system management by VIX API<br />&#160;&#160;&#160;&#160;Inject USB HID scan codes<br />&#160;&#160;&#160;&#160;Install VMware tools<br />&#160;&#160;&#160;&#160;Pause or Unpause<br />&#160;&#160;&#160;&#160;Perform wipe or shrink operations<br />&#160;&#160;&#160;&#160;Power off<br />&#160;&#160;&#160;&#160;Power on<br />&#160;&#160;&#160;&#160;Record session on virtual machine<br />&#160;&#160;&#160;&#160;Replay session on virtual machine<br />&#160;&#160;&#160;&#160;Suspend<br />&#160;&#160;&#160;&#160;Suspend fault tolerance<br />&#160;&#160;&#160;&#160;Test failover<br />&#160;&#160;&#160;&#160;Test restart secondary VM<br />&#160;&#160;&#160;&#160;Turn off fault tolerance<br />&#160;&#160;&#160;&#160;Turn on fault tolerance<br />Provisioning<br />&#160;&#160;&#160;&#160;Allow disk access<br />&#160;&#160;&#160;&#160;Allow file access<br />&#160;&#160;&#160;&#160;Allow read-only disk access<br />&#160;&#160;&#160;&#160;Allow virtual machine download<br />&#160;&#160;&#160;&#160;Clone template<br />&#160;&#160;&#160;&#160;Clone virtual machine<br />&#160;&#160;&#160;&#160;Create template from virtual machine<br 
/>&#160;&#160;&#160;&#160;Customize guest<br />&#160;&#160;&#160;&#160;Deploy template<br />&#160;&#160;&#160;&#160;Mark as template<br />&#160;&#160;&#160;&#160;Modify customization specification<br />&#160;&#160;&#160;&#160;Promote disks<br />&#160;&#160;&#160;&#160;Read customization specifications<br />Service configuration<br />&#160;&#160;&#160;&#160;Allow notifications<br />&#160;&#160;&#160;&#160;Allow polling of global event notifications<br />&#160;&#160;&#160;&#160;Manage service configuration<br />&#160;&#160;&#160;&#160;Modify service configuration<br />&#160;&#160;&#160;&#160;Query service configurations<br />&#160;&#160;&#160;&#160;Read service configuration<br />Snapshot management<br />&#160;&#160;&#160;&#160;Create snapshot<br />&#160;&#160;&#160;&#160;Remove snapshot<br />&#160;&#160;&#160;&#160;Rename snapshot<br />&#160;&#160;&#160;&#160;Revert snapshot<br />vSphere Replication<br />&#160;&#160;&#160;&#160;Configure replication<br />&#160;&#160;&#160;&#160;Manage replication<br />&#160;&#160;&#160;&#160;Monitor replication |
+| **Virtual machine** | Change Configuration<br />&#160;&#160;&#160;&#160;Acquire disk lease<br />&#160;&#160;&#160;&#160;Add existing disk<br />&#160;&#160;&#160;&#160;Add new disk<br />&#160;&#160;&#160;&#160;Add or remove device<br />&#160;&#160;&#160;&#160;Advanced configuration<br />&#160;&#160;&#160;&#160;Change CPU count<br />&#160;&#160;&#160;&#160;Change memory<br />&#160;&#160;&#160;&#160;Change settings<br />&#160;&#160;&#160;&#160;Change swapfile placement<br />&#160;&#160;&#160;&#160;Change resource<br />&#160;&#160;&#160;&#160;Configure host USB device<br />&#160;&#160;&#160;&#160;Configure raw device<br />&#160;&#160;&#160;&#160;Configure managedBy<br />&#160;&#160;&#160;&#160;Display connection settings<br />&#160;&#160;&#160;&#160;Extend virtual disk<br />&#160;&#160;&#160;&#160;Modify device settings<br />&#160;&#160;&#160;&#160;Query fault tolerance compatibility<br />&#160;&#160;&#160;&#160;Query unowned files<br />&#160;&#160;&#160;&#160;Reload from paths<br />&#160;&#160;&#160;&#160;Remove disk<br />&#160;&#160;&#160;&#160;Rename<br />&#160;&#160;&#160;&#160;Reset guest information<br />&#160;&#160;&#160;&#160;Set annotation<br />&#160;&#160;&#160;&#160;Toggle disk change tracking<br />&#160;&#160;&#160;&#160;Toggle fork parent<br />&#160;&#160;&#160;&#160;Upgrade virtual machine compatibility<br />Edit inventory<br />&#160;&#160;&#160;&#160;Create from existing<br />&#160;&#160;&#160;&#160;Create new<br />&#160;&#160;&#160;&#160;Move<br />&#160;&#160;&#160;&#160;Register<br />&#160;&#160;&#160;&#160;Remove<br />&#160;&#160;&#160;&#160;Unregister<br />Guest operations<br />&#160;&#160;&#160;&#160;Guest operation alias modification<br />&#160;&#160;&#160;&#160;Guest operation alias query<br />&#160;&#160;&#160;&#160;Guest operation modifications<br />&#160;&#160;&#160;&#160;Guest operation program execution<br />&#160;&#160;&#160;&#160;Guest operation queries<br />Interaction<br />&#160;&#160;&#160;&#160;Answer question<br />&#160;&#160;&#160;&#160;Back up operation on virtual machine<br />&#160;&#160;&#160;&#160;Configure CD media<br />&#160;&#160;&#160;&#160;Configure floppy media<br />&#160;&#160;&#160;&#160;Connect devices<br />&#160;&#160;&#160;&#160;Console interaction<br />&#160;&#160;&#160;&#160;Create screenshot<br />&#160;&#160;&#160;&#160;Defragment all disks<br />&#160;&#160;&#160;&#160;Drag and drop<br />&#160;&#160;&#160;&#160;Guest operating system management by VIX API<br />&#160;&#160;&#160;&#160;Inject USB HID scan codes<br />&#160;&#160;&#160;&#160;Install VMware tools<br />&#160;&#160;&#160;&#160;Pause or Unpause<br />&#160;&#160;&#160;&#160;Wipe or shrink operations<br />&#160;&#160;&#160;&#160;Power off<br />&#160;&#160;&#160;&#160;Power on<br />&#160;&#160;&#160;&#160;Record session on virtual machine<br />&#160;&#160;&#160;&#160;Replay session on virtual machine<br />&#160;&#160;&#160;&#160;Suspend<br />&#160;&#160;&#160;&#160;Suspend fault tolerance<br />&#160;&#160;&#160;&#160;Test failover<br />&#160;&#160;&#160;&#160;Test restart secondary VM<br />&#160;&#160;&#160;&#160;Turn off fault tolerance<br />&#160;&#160;&#160;&#160;Turn on fault tolerance<br />Provisioning<br />&#160;&#160;&#160;&#160;Allow disk access<br />&#160;&#160;&#160;&#160;Allow file access<br />&#160;&#160;&#160;&#160;Allow read-only disk access<br />&#160;&#160;&#160;&#160;Allow virtual machine download<br />&#160;&#160;&#160;&#160;Clone template<br />&#160;&#160;&#160;&#160;Clone virtual machine<br />&#160;&#160;&#160;&#160;Create template from virtual machine<br 
/>&#160;&#160;&#160;&#160;Customize guest<br />&#160;&#160;&#160;&#160;Deploy template<br />&#160;&#160;&#160;&#160;Mark as template<br />&#160;&#160;&#160;&#160;Modify customization specification<br />&#160;&#160;&#160;&#160;Promote disks<br />&#160;&#160;&#160;&#160;Read customization specifications<br />Service configuration<br />&#160;&#160;&#160;&#160;Allow notifications<br />&#160;&#160;&#160;&#160;Allow polling of global event notifications<br />&#160;&#160;&#160;&#160;Manage service configuration<br />&#160;&#160;&#160;&#160;Modify service configuration<br />&#160;&#160;&#160;&#160;Query service configurations<br />&#160;&#160;&#160;&#160;Read service configuration<br />Snapshot management<br />&#160;&#160;&#160;&#160;Create snapshot<br />&#160;&#160;&#160;&#160;Remove snapshot<br />&#160;&#160;&#160;&#160;Rename snapshot<br />&#160;&#160;&#160;&#160;Revert snapshot<br />vSphere Replication<br />&#160;&#160;&#160;&#160;Configure replication<br />&#160;&#160;&#160;&#160;Manage replication<br />&#160;&#160;&#160;&#160;Monitor replication |
| **vService** | Create dependency<br />Destroy dependency<br />Reconfigure dependency configuration<br />Update dependency |
+## Create custom roles on vCenter
+Azure VMware Solution supports the use of custom roles with privileges equal to or lesser than those of the CloudAdmin role.
+
+The CloudAdmin role can create, modify, or delete custom roles that have privileges lesser than or equal to its own. You may be able to create roles that have privileges greater than CloudAdmin, but you won't be able to assign the role to any users or groups, or to delete it.
+
+To prevent the creation of roles that can't be assigned or deleted, Azure VMware Solution recommends cloning the CloudAdmin role as the basis for creating new custom roles.
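If you'd rather script the clone than use the vSphere Client steps that follow, a pyVmomi sketch along the following lines could work. It's only an illustration: the vCenter address, credentials, and new role name are placeholders, and you should verify the resulting privilege list before using the role.

```python
# Hypothetical pyVmomi sketch: clone the CloudAdmin role into a new custom role
# by copying its privilege list. Host, password, and role name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="cloudadmin@vsphere.local",
                  pwd="<password>",
                  sslContext=context)

auth_mgr = si.RetrieveContent().authorizationManager
cloud_admin = next(r for r in auth_mgr.roleList if r.name == "CloudAdmin")

# Create the new role with the same privileges as CloudAdmin; remove privileges as needed.
new_role_id = auth_mgr.AddAuthorizationRole(name="CloudAdmin-Clone",
                                            privIds=list(cloud_admin.privilege))
print(f"Created role CloudAdmin-Clone with ID {new_role_id}")
Disconnect(si)
```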
+
+### Create a custom role
+1. Sign in to vCenter with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
+2. To open the **Roles** configuration section, select **Menu** > **Administration** > **Access Control** > **Roles**.
+3. Select the **CloudAdmin** role and select the **Clone role action** icon.
+
+ > [!NOTE]
+ > Do not clone the **Administrator** role. This role cannot be used and the custom role created cannot be deleted by cloudadmin\@vsphere.local.
+
+4. Provide the name you want for the cloned role.
+5. Add or remove privileges for the role and select **OK**. The cloned role should now be visible in the **Roles** list.
++
+### Use a custom role
+
+1. Navigate to the object that requires the added permission. For example, to apply the permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**.
+1. Right-click the object and select **Add Permission**.
+1. In the **Add Permission** window, select the identity source in the **User** drop-down list where the group or user can be found.
+1. Search for and select the user or group under the **User** section.
+1. Select the role to apply to the user or group.
+1. Select **Propagate to children** if needed, and select **OK**.
+ The added permission displays in the **Permissions** section for the object.
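The same assignment can be scripted. The sketch below continues the earlier pyVmomi example (it assumes the `si` connection and `new_role_id` from that sketch) and applies the custom role to a VM folder for an AD group; the folder and group names are placeholders.

```python
# Hypothetical continuation of the earlier pyVmomi sketch: assign the cloned role
# to an AD group on a VM folder and propagate it to children. Names are placeholders.
from pyVmomi import vim

content = si.RetrieveContent()
auth_mgr = content.authorizationManager

# Find the target folder by name (placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Folder], True)
folder = next(obj for obj in view.view if obj.name == "MigratedVMs")
view.DestroyView()

permission = vim.AuthorizationManager.Permission(
    principal="CONTOSO\\vm-operators",   # AD group (placeholder)
    group=True,
    roleId=new_role_id,                  # role created in the earlier sketch
    propagate=True)
auth_mgr.SetEntityPermissions(entity=folder, permission=[permission])
```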
## Next steps
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
Now that you've covered using Application Gateway to protect a web app running o
- [Configuring Azure Application Gateway for different scenarios](../application-gateway/configuration-overview.md). - [Deploying Traffic Manager to balance Azure VMware Solution workloads](deploy-traffic-manager-balance-workloads.md). - [Integrating Azure NetApp Files with Azure VMware Solution-based workloads](netapp-files-with-azure-vmware-solution.md).-- [Protecting Azure resources in virtual networks](../ddos-protection/ddos-protection-overview.md)
+- [Protecting Azure resources in virtual networks](../ddos-protection/ddos-protection-overview.md).
azure-vmware Reset Vsphere Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reset-vsphere-credentials.md
+
+ Title: Reset vSphere credentials for Azure VMware Solution
+description: Learn how to reset vSphere credentials for your Azure VMware Solution private cloud and ensure the HCX connector has the latest vSphere credentials.
+ Last updated : 03/16/2021++
+# Reset vSphere credentials for Azure VMware Solution
+
+In this article, we'll walk through the steps to reset the vSphere credentials for your Azure VMware Solution private cloud. This will allow you to ensure the HCX connector has the latest vSphere credentials.
+
+## Reset your vSphere credentials
+
+First, let's reset your vSphere credentials. Your vCenter CloudAdmin and NSX-T admin credentials don't expire; however, you can follow these steps to generate new passwords for these accounts.
+
+> [!NOTE]
+> If you use your CloudAdmin credentials for connected services like HCX, vCenter Orchestrator, vCloud Director, or vRealize, your connections will stop working once you update your password. These services should be stopped before initiating the password rotation. Failure to do so may result in temporary locks on your vCenter CloudAdmin and NSX-T admin accounts, as these services will continuously call using your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](https://docs.microsoft.com/azure/azure-vmware/concepts-identity).
+
+1. In your Azure VMware Solution portal, open a command line.
+
+2. Run the following command to update your vCenter CloudAdmin password. You will need to replace {SubscriptionID}, {ResourceGroup}, and {PrivateCloudName} with the actual values of the private cloud that the CloudAdmin account belongs to.
+
+```azurecli
+az resource invoke-action --action rotateVcenterPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
+```
+
+3. Run the following command to update your NSX-T admin password. You will need to replace {SubscriptionID}, {ResourceGroup}, and {PrivateCloudName} with the actual values of the private cloud that the NSX-T admin account belongs to.
+
+```azurecli
+az resource invoke-action --action rotateNSXTPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
+```
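If you prefer to invoke the same actions from code rather than the Azure CLI, a Python sketch like the following should be equivalent; it calls the same resource action and API version shown above. The subscription, resource group, and private cloud names remain placeholders, and the `azure-identity` and `requests` packages are assumed to be installed.

```python
# Hypothetical Python alternative to the CLI commands above: calls the
# rotateVcenterPassword action through the Azure REST API. Use
# rotateNSXTPassword in the URL to rotate the NSX-T admin password instead.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "{SubscriptionID}"     # placeholder
resource_group = "{ResourceGroup}"       # placeholder
private_cloud = "{PrivateCloudName}"     # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.AVS"
    f"/privateClouds/{private_cloud}/rotateVcenterPassword"
    "?api-version=2020-07-17-preview"
)

response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.status_code)   # a success status code means the rotation request was accepted
```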
+
+## Ensure the HCX connector has your latest vSphere credentials
+
+Now that you've reset your credentials, follow these steps to ensure the HCX connector has your updated credentials.
+
+1. Once your password is changed, go to the on-premises HCX connector web interface using https://{ip of the HCX connector appliance}:443. Be sure to use port 443. Log in using your new credentials.
+
+2. On the VMware HCX Dashboard, select **Site Pairing**.
+
+ :::image type="content" source="media/reset-vsphere-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
+
+3. Select the correct connection to AVS (if there is more than one) and select **Edit Connection**.
+
+4. Provide the new vSphere credentials and select **Edit** to save the credentials. The save should show as successful.
+
+## Next steps
+
+Now that you've covered resetting vSphere credentials for Azure VMware Solution, you may want to learn about:
+
+- [Configuring NSX network components in Azure VMware Solution](configure-nsx-network-components-azure-portal.md).
+- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [Deploying disaster recovery of virtual machines using Azure VMware Solution](disaster-recovery-for-virtual-machines.md).
batch Batch Pool Node Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-node-error-checking.md
Title: Check for pool and node errors
description: This article covers the background operations that can occur, along with errors to check for and how to avoid them when creating pools and nodes. Previously updated : 02/03/2020 Last updated : 03/15/2021
When you delete a pool that contains nodes, first Batch deletes the nodes. This
Batch sets the [pool state](/rest/api/batchservice/pool/get#poolstate) to **deleting** during the deletion process. The calling application can detect if the pool deletion is taking too long by using the **state** and **stateTransitionTime** properties.
+If the pool deletion is taking longer than expected, Batch retries periodically until the pool can be successfully deleted. In some cases, the delay is due to an Azure service outage or other temporary issues. Other factors that can prevent a pool from being deleted successfully may require you to take action to correct the issue. These factors include the following:
+
+- Resource locks have been placed on Batch-created resources, or on network resources used by Batch.
+- Resources that you created have a dependency on a Batch-created resource. For instance, if you [create a pool in a virtual network](batch-virtual-network.md), Batch creates a network security group (NSG), a public IP address, and a load balancer. If you use these resources outside of the pool, the pool can't be deleted until that dependency is removed.
+- The Microsoft.Batch resource provider was unregistered from the subscription that contains your pool.
+- "Microsoft Azure Batch" no longer has the [Contributor or Owner role](batch-account-create-portal.md#allow-azure-batch-to-access-the-subscription-one-time-operation) to the subscription that contains your pool (for user subscription mode Batch accounts).
+ ## Node errors Even when Batch successfully allocates nodes in a pool, various issues can cause some of the nodes to be unhealthy and unable to run tasks. These nodes still incur charges, so it's important to detect problems to avoid paying for nodes that can't be used. In addition to common node errors, knowing the current [job state](/rest/api/batchservice/job/get#jobstate) is useful for troubleshooting.
If Batch can determine the cause, the node [errors](/rest/api/batchservice/compu
Additional examples of causes for **unusable** nodes include: - A custom VM image is invalid. For example, an image that's not properly prepared.- - A VM is moved because of an infrastructure failure or a low-level upgrade. Batch recovers the node.- - A VM image has been deployed on hardware that doesn't support it. For example, trying to run a CentOS HPC image on a [Standard_D1_v2](../virtual-machines/dv2-dsv2-series.md) VM.- - The VMs are in an [Azure virtual network](batch-virtual-network.md), and traffic has been blocked to key ports.- - The VMs are in a virtual network, but outbound traffic to Azure storage is blocked.- - The VMs are in a virtual network with a customer DNS configuration and the DNS server cannot resolve Azure storage. ### Node agent log files
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 01/13/2021 Last updated : 03/15/2021
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
The automated cleanup for the working directory will be blocked if you run a ser
## Next steps -- [Create an Azure Batch account using the Azure portal](batch-account-create-portal.md). - Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.-- Learn about [default Azure Batch quotas, limits, and constraints, and how to request quota increases](batch-quota-limit.md).
+- Learn about [default Azure Batch quotas, limits, and constraints, and how to request quota increases](batch-quota-limit.md).
+- Learn how to [detect and avoid failures in pool and node background operations](batch-pool-node-error-checking.md).
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/nodes-and-pools.md
If you add a certificate to an existing pool, you must reboot its compute nodes
## Next steps - Learn about [jobs and tasks](jobs-and-tasks.md).
+- Learn how to [detect and avoid failures in pool and node background operations](batch-pool-node-error-checking.md).
blockchain Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/overview.md
Title: Azure Blockchain Service overview description: Overview of Azure Blockchain Service Previously updated : 01/04/2021 Last updated : 03/15/2021 -+ #Customer intent: As a network operator or developer, I want to understand how I can use Azure Blockchain Service to build and manage consortium blockchain networks on Azure
Azure Blockchain Service is a fully managed ledger service that gives users the
* Built-in consortium management * Develop smart contracts with familiar development tools
-Azure Blockchain Service is designed to support multiple ledger protocols. Currently, it provides support for the Ethereum [Quorum](https://www.goquorum.com/) ledger using the [Istanbul Byzantine Fault Tolerance (IBFT)](https://github.com/jpmorganchase/quorum/wiki/Quorum-Consensus) consensus mechanism.
+Azure Blockchain Service is designed to support multiple ledger protocols. Currently, it provides support for the Ethereum [Quorum](https://www.goquorum.com/) ledger using the [Istanbul Byzantine Fault Tolerance (IBFT)](https://docs.goquorum.consensys.net/en/stable/Concepts/Consensus/IBFT/) consensus mechanism.
These capabilities require almost no administration and all are provided at no additional cost. You can focus on app development and business logic rather than allocating time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver your solutions without having to learn new skills.
cdn Cdn Improve Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-improve-performance.md
The standard and premium CDN tiers provide the same compression functionality, b
> Although it is possible, it is not recommended to apply compression to compressed formats. For example, ZIP, MP3, MP4, or JPG. >
- > [!NOTE]
- > Modifying the default list of MIME types is currently not supported in Azure CDN Standard from Microsoft.
- >
- 5. After making your changes, select **Save**. ### Premium CDN profiles
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
## Deploy a Cloud Service (extended support) > [!NOTE]
- An alternative way of deploying your cloud service (extended support) is via [Azure portal](https://portal.azure.com). You can download the generated ARM template via the portal for your future deployments
+> An alternative way of deploying your cloud service (extended support) is via [Azure portal](https://portal.azure.com). You can download the generated ARM template via the portal for your future deployments
1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template.
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
+## Speech Devices SDK 1.15.0:
+
+- Upgraded to new Microsoft Audio Stack (MAS) with improved beamforming and noise reduction for speech.
+- Reduced the binary size by as much as 70%, depending on the target.
+- Support for [Azure Percept Audio](https://docs.microsoft.com/azure/azure-percept/overview-azure-percept-audio) with [binary release](https://aka.ms/sdsdk-download-APAudio).
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.15.0. For more information, see its [release notes](./releasenotes.md).
+ ## Speech Devices SDK 1.11.0: - Support for [arbitrary microphone array geometries](how-to-devices-microphone-array-configuration.md) and setting the working angle through a [configuration file](https://aka.ms/sdsdk-micarray-json).
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Speech-to-text has two different REST APIs. Each API serves its special purpose
The Speech-to-text REST APIs are: - [Speech-to-text REST API v3.0](#speech-to-text-rest-api-v30) is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md).-- [Speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio) is used for OnLine transcription as an alternative to the [Speech SDK](speech-sdk.md). Requests using this API can transmit only up to 60 seconds of audio per request.
+- [Speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio) is used for online transcription as an alternative to the [Speech SDK](speech-sdk.md). Requests using this API can transmit only up to 60 seconds of audio per request.
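As an illustration of the short audio API, a request can be as simple as an HTTP POST with the subscription key header and WAV content. The following Python sketch is a minimal example; the region, key, and audio file are placeholders.

```python
# Hypothetical call to the Speech-to-text REST API for short audio (up to 60 seconds).
import requests

region = "westus"                       # placeholder Speech resource region
subscription_key = "<your-speech-key>"  # placeholder key

endpoint = (f"https://{region}.stt.speech.microsoft.com/speech/recognition/"
            "conversation/cognitiveservices/v1?language=en-US")

with open("sample.wav", "rb") as audio:
    response = requests.post(
        endpoint,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        },
        data=audio,
    )

response.raise_for_status()
print(response.json().get("DisplayText"))   # simple-format responses include DisplayText
```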
## Speech-to-text REST API v3.0
-Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). If you need to communicate with the OnLine transcription via REST, use [Speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio).
+Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). If you need to communicate with the online transcription via REST, use [Speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio).
Use REST API v3.0 to: - Copy models to other subscriptions in case you want colleagues to have access to a model you built, or in cases where you want to deploy a model to more than one region
cognitive-services Speech Container Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-faq.md
Just to clarify for the interactive, conversation, and dictation; this is an adv
This can all be verified from the docker logs. We actually dump the line with session and phrase/utterance statistics, and that includes the RTF numbers. -
-<br>
-</details>
-
-<details>
-<summary>
-<b>Is it common to split audio files into chucks for Speech container usage?</b>
-</summary>
-
-My current plan is to take an existing audio file and split it up into 10 second chunks and send those through the container. Is that an acceptable scenario? Is there a better way to process larger audio files with the container?
-
-**Answer:** Just use the speech SDK and give it the file, it will do the right thing. Why do you need to chunk the file?
-- <br> </details>
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Jump to [Text-to-Speech Quotas and limits](#text-to-speech-quotas-and-limits-per
In the tables below Parameters without "Adjustable" row are **not** adjustable for all price tiers. #### Online Transcription
+For usage with the [Speech SDK](speech-sdk.md) and/or the [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio).
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
cognitive-services Tutorial Power Bi Key Phrases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/tutorials/tutorial-power-bi-key-phrases.md
Now you'll use this column to generate a word cloud. To get started, click the *
> [!NOTE] > Why use extracted key phrases to generate a word cloud, rather than the full text of every comment? The key phrases provide us with the *important* words from our customer comments, not just the *most common* words. Also, word sizing in the resulting cloud isn't skewed by the frequent use of a word in a relatively small number of comments.
-If you don't already have the Word Cloud custom visual installed, install it. In the Visualizations panel to the right of the workspace, click the three dots (**...**) and choose **Import From Store**. Then search for "cloud" and click the **Add** button next the Word Cloud visual. Power BI installs the Word Cloud visual and lets you know that it installed successfully.
+If you don't already have the Word Cloud custom visual installed, install it. In the Visualizations panel to the right of the workspace, click the three dots (**...**) and choose **Import From Market**. If the word "cloud" is not among the displayed visualization tools in the list, you can search for "cloud" and click the **Add** button next to the Word Cloud visual. Power BI installs the Word Cloud visual and lets you know that it installed successfully.
![[adding a custom visual]](../media/tutorials/power-bi/add-custom-visuals.png)<br><br>
First, click the Word Cloud icon in the Visualizations panel.
A new report appears in the workspace. Drag the `keyphrases` field from the Fields panel to the Category field in the Visualizations panel. The word cloud appears inside the report.
-Now switch to the Format page of the Visualizations panel. In the Stop Words category, turn on **Default Stop Words** to eliminate short, common words like "of" from the cloud.
+Now switch to the Format page of the Visualizations panel. In the Stop Words category, turn on **Default Stop Words** to eliminate short, common words like "of" from the cloud. However, because we're visualizing key phrases rather than full comments, they might not contain many stop words.
![[activating default stop words]](../media/tutorials/power-bi/default-stop-words.png)
The Sentiment Analysis function below returns a score indicating how positive th
headers = [#"Ocp-Apim-Subscription-Key" = apikey], bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]), jsonresp = Json.Document(bytesresp),
- sentiment = jsonresp[documents]{0}[confidenceScores]
-in sentiment
+ sentiment = jsonresp[documents]{0}[confidenceScores][positive] in sentiment
``` Here are two versions of a Language Detection function. The first returns the ISO language code (for example, `en` for English), while the second returns the "friendly" name (for example, `English`). You may notice that only the last line of the body differs between the two versions.
Here are two versions of a Language Detection function. The first returns the IS
headers = [#"Ocp-Apim-Subscription-Key" = apikey], bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]), jsonresp = Json.Document(bytesresp),
- language = jsonresp[documents]{0}[detectedLanguages]{0}[iso6391Name]
-in language
+ language = jsonresp[documents]{0}[detectedLanguage][iso6391Name] in language
``` ```fsharp // Returns the name (for example, 'English') of the language in which the text is written
in language
headers = [#"Ocp-Apim-Subscription-Key" = apikey], bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]), jsonresp = Json.Document(bytesresp),
- language = jsonresp[documents]{0}[detectedLanguages]{0}[name]
-in language
+ language = jsonresp[documents]{0}[detectedLanguage][name] in language
``` Finally, here's a variant of the Key Phrases function already presented that returns the phrases as a list object, rather than as a single string of comma-separated phrases.
Learn more about the Text Analytics service, the Power Query M formula language,
> [Power Query M reference](/powerquery-m/power-query-m-reference) > [!div class="nextstepaction"]
-> [Power BI documentation](https://powerbi.microsoft.com/documentation/powerbi-landing-page/)
+> [Power BI documentation](https://powerbi.microsoft.com/documentation/powerbi-landing-page/)
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
The following sections in this article provides a list of services that are part
|[Custom Vision Service](./custom-vision-service/overview.md "Custom Vision Service")|The Custom Vision Service allows you to build custom image classifiers.| |[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| |[Form Recognizer](./form-recognizer/index.yml "Form Recognizer")|Form Recognizer identifies and extracts key-value pairs and table data from form documents; then outputs structured data including the relationships in the original file.|
-|[Ink Recognizer](/previous-versions/azure/cognitive-services/Ink-Recognizer/ "Ink Recognizer") (Retiring)|Ink Recognizer allows you to recognize and analyze digital ink stroke data, shapes and handwritten content, and output a document structure with all recognized entities.|
|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video.| ## Speech APIs
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
# Authenticate to Azure Communication Services
-Every client interaction with Azure Communication Services needs to be authenticated. In a typical architecture, see [client and server architecture](./client-and-server-architecture.md), *access keys* or *managed identity* is used in the trusted user access service to create users and issue tokens. And *user access token* issued by the trusted user access service is used for client applications to access other communication services, for example, chat or calling service.
+Every client interaction with Azure Communication Services needs to be authenticated. In a typical architecture (see [client and server architecture](./client-and-server-architecture.md)), *access keys* or *managed identities* are used for authentication.
-Azure Communication Services SMS service also accepts *access keys* or *managed identity* for authentication. This typically happens in a service application running in a trusted service environment.
+Another type of authentication uses *user access tokens* to authenticate against services that require user participation. For example, the chat or calling service utilizes *user access tokens* to allow users to be added in a thread and have conversations with each other.
+
+## Authentication options
+
+The following table shows the Azure Communication Services client libraries and their authentication options:
+
+| Client Library | Authentication option |
+| -- | -|
+| Identity | Access Key or Managed Identity |
+| SMS | Access Key or Managed Identity |
+| Phone Numbers | Access Key or Managed Identity |
+| Calling | User Access Token |
+| Chat | User Access Token |
Each authorization option is briefly described below: -- **Access Key** authentication for SMS and Identity operations. Access Key authentication is suitable for service applications running in a trusted service environment. Access key can be found in Azure Communication Services portal. To authenticate with an access key, a service application uses the access key as credential to initialize corresponding SMS or Identity client libraries, see [Create and manage access tokens](../quickstarts/access-tokens.md). Since access key is part of the connection string of your resource, see [Create and manage Communication Services resources](../quickstarts/create-communication-resource.md), authentication with connection string is equivalent to authentication with access key.-- **Managed Identity** authentication for SMS and Identity operations. Managed Identity, see [Managed Identity](../quickstarts/managed-identity.md), is suitable for service applications running in a trusted service environment. To authenticate with a managed identity, a service application creates a credential with the ID and a secret of the managed identity then initialize corresponding SMS or Identity client libraries, see [Create and manage access tokens](../quickstarts/access-tokens.md).-- **User Access Token** authentication for Chat and Calling. User access tokens let your client applications authenticate against Azure Communication Chat and Calling Services. These tokens are generated in a "trusted user access service" that you create. They're then provided to client devices that use the token to initialize the Chat and Calling client libraries. For more information, see [Add Chat to your App](../quickstarts/chat/get-started.md) for example.
+- **Access Key** authentication is suitable for service applications running in a trusted service environment. The access key can be found in the Azure Communication Services portal, and the service application uses it as the credential to initialize the corresponding client libraries. See an example of how it's used in the [Identity client library](../quickstarts/access-tokens.md). Since the access key is part of the connection string of your resource, authentication with a connection string is equivalent to authentication with an access key.
+
+- **Managed Identity** authentication provides superior security and ease of use over other authorization options. For example, by using Azure AD, you avoid having to store your account access key with your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with communication services applications, Microsoft recommends moving to Azure AD where possible. To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/managed-identity-from-cli.md). Then, the endpoint and credentials can be used to authenticate the client libraries. See examples of how [managed identity](../quickstarts/managed-identity.md) is used.
+
+- **User Access Tokens** are generated using the Identity client library and are associated with users created in the Identity client library. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and managed identity authentication in that it is used to authenticate a user rather than a secured Azure resource.
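As a brief illustration of the identity and token flow, the following Python sketch uses the Identity client library to create a user and issue a chat-scoped user access token. The connection string is a placeholder, and in a real deployment this code runs in your trusted service, not on the client.

```python
# Hypothetical sketch: create a Communication Services user and issue a chat-scoped
# user access token. The connection string is a placeholder.
from azure.communication.identity import (
    CommunicationIdentityClient,
    CommunicationTokenScope,
)

connection_string = "endpoint=https://<resource>.communication.azure.com/;accesskey=<key>"
client = CommunicationIdentityClient.from_connection_string(connection_string)

user = client.create_user()   # identity representing the end user
token_response = client.get_token(user, scopes=[CommunicationTokenScope.CHAT])

# Hand token_response.token to the client app, which uses it to initialize
# the Chat or Calling client libraries.
print("Issued a token that expires on:", token_response.expires_on)
```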
## Next steps > [!div class="nextstepaction"] > [Create and manage Communication Services resources](../quickstarts/create-communication-resource.md) > [Create an Azure Active Directory managed identity application from the Azure CLI](../quickstarts/managed-identity-from-cli.md)
-> [Creating user access tokens](../quickstarts/access-tokens.md)
+> [Create User Access Tokens](../quickstarts/access-tokens.md)
For more information, see the following articles: - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
container-instances Container Instances Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-faq.md
See more [detailed guidance](container-instances-troubleshooting.md#container-ta
> [!NOTE] > Windows images based on Semi-Annual Channel release 1709 or 1803 are not supported.
-#### Windows Server 2019 and client base images (preview)
+#### Windows Server 2019 and client base images
* [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver): `1809`, `10.0.17763.1040` or newer * [Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore): `ltsc2019`, `1809`, `10.0.17763.1040` or newer
See more [detailed guidance](container-instances-troubleshooting.md#container-ta
Use the smallest image that satisfies your requirements. For Linux, you could use a *runtime-alpine* .NET Core image, which has been supported since the release of .NET Core 2.1. For Windows, if you are using the full .NET Framework, then you need to use a Windows Server Core image (runtime-only image, such as *4.7.2-windowsservercore-ltsc2016*). Runtime-only images are smaller but do not support workloads that require the .NET SDK.
+> [!NOTE]
+> ACI cannot pull images from non OCI-compliant registries.
+ ### What types of container registries are compatible with ACI?
-ACI supports image pulls from ACR and other third-party container registries such as DockerHub. ACI also supports image pulls from on-premise registries as long as they are OCR-compatible and have an endpoint that is publicly exposed to the internet.
+ACI supports image pulls from ACR and other third-party OCI-compliant container registries, such as DockerHub, that have an endpoint publicly exposed to the internet.
## Availability and quotas
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-overview.md
Azure Container Instances is a great solution for any scenario that can operate
Containers offer significant startup benefits over virtual machines (VMs). Azure Container Instances can start containers in Azure in seconds, without the need to provision and manage VMs.
-Bring Linux or Windows container images from Docker Hub, a private [Azure container registry](../container-registry/index.yml), or another cloud-based docker registry. Azure Container Instances caches several common base OS images, helping speed deployment of your custom application images.
-
-> [!NOTE]
-> Currently, you can't deploy an image from an on-premises registry to Azure Container Instances.
+Bring Linux or Windows container images from Docker Hub, a private [Azure container registry](../container-registry/index.yml), or another cloud-based docker registry. Visit the [FAQ](container-instances-faq.md) to learn which registries are supported by ACI. Azure Container Instances caches several common base OS images, helping speed deployment of your custom application images.
## Container access
Historically, containers have offered application dependency isolation and resou
### Customer data
-The ACI service stores the minimum customer data required to ensure your container groups are running as expected. Storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo. For all other regions, customer data is stored in [Geo](https://azure.microsoft.com/global-infrastructure/geographies/). Please get in touch with Azure Support to learn more.
+The ACI service stores the minimum customer data required to ensure your container groups are running as expected. Storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in [Geo](https://azure.microsoft.com/global-infrastructure/geographies/). Please get in touch with Azure Support to learn more.
## Custom sizes
Some features are currently restricted to Linux containers:
For Windows container deployments, use images based on common [Windows base images](container-instances-faq.md#what-windows-base-os-images-are-supported).
-> [!NOTE]
-> Use of Windows Server 2019-based images in Azure Container Instances is in preview.
- ## Co-scheduled groups Azure Container Instances supports scheduling of [multi-container groups](container-instances-container-groups.md) that share a host machine, local network, storage, and lifecycle. This enables you to combine your main application container with other supporting role containers, such as logging sidecars.
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| South Central US | 4 | 16 | 4 | 16 | 50 | N/A | | Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | | South India | 4 | 16 | N/A | N/A | 50 | N/A |
+| Switzerland North | 3 | 16 | N/A | N/A | 50 | N/A |
| UK South | 4 | 16 | 4 | 16 | 50 | N/A | | UAE North | 3 | 16 | N/A | N/A | 50 | N/A | | West Central US| 4 | 16 | 4 | 16 | 50 | N/A |
cosmos-db Cassandra Migrate Cosmos Db Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-migrate-cosmos-db-databricks.md
Previously updated : 11/16/2020 Last updated : 03/10/2021
There are various ways to migrate database workloads from one platform to anothe
## Provision an Azure Databricks cluster
-You can follow instructions to [Provision an Azure Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). However, please note Apache Spark 3.x is not currently supported for the Apache Cassandra Connector. You will need to provision a Databricks runtime with a supported v2.x version of Apache Spark. We recommend selecting a version of the Databricks runtime which supports the latest version of Spark 2.x, with no later than Scala version 2.11:
+You can follow instructions to [Provision an Azure Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). We recommend selecting Databricks runtime version 7.5, which supports Spark 3.0:
:::image type="content" source="./media/cassandra-migrate-cosmos-db-databricks/databricks-runtime.png" alt-text="Databricks runtime"::: ## Add dependencies
-You will need to add the Apache Spark Cassandra connector library to your cluster in order to connect to both native and Cosmos DB Cassandra endpoints. In your cluster select libraries -> install new -> maven -> search packages:
+You will need to add the Apache Spark Cassandra connector library to your cluster in order to connect to both native and Cosmos DB Cassandra endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates:
:::image type="content" source="./media/cassandra-migrate-cosmos-db-databricks/databricks-search-packages.png" alt-text="Databricks search packages":::
-Type `Cassandra` in the search box, and select the latest `spark-cassandra-connector` maven repository available, then select install:
-
+Select install, and ensure you restart the cluster when installation is complete.
> [!NOTE]
> Ensure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
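Before the throughput settings shown next, the connector also needs the Cosmos DB Cassandra API endpoint details. The following is a minimal sketch of those connection options; the host name, account name, key, keyspace, and table are placeholders you would replace with your own values:

```scala
// Sketch only: connection options for a Cosmos DB Cassandra API endpoint.
// <ACCOUNTNAME>, <PASSWORD>, <KEYSPACE>, and <TABLE> are placeholders.
val cosmosConnectionOptions = Map(
  "spark.cassandra.connection.host" -> "<ACCOUNTNAME>.cassandra.cosmosdb.azure.com",
  "spark.cassandra.connection.port" -> "10350",
  "spark.cassandra.connection.ssl.enabled" -> "true",
  "spark.cassandra.auth.username" -> "<ACCOUNTNAME>",
  "spark.cassandra.auth.password" -> "<PASSWORD>",
  "keyspace" -> "<KEYSPACE>",
  "table" -> "<TABLE>"
)
```

These options can be merged into the `cosmosCassandra` map shown below before they are passed to the Spark reader or writer.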
val cosmosCassandra = Map(
"table" -> "<TABLE>", //throughput related settings below - tweak these depending on data volumes. "spark.cassandra.output.batch.size.rows"-> "1",
- "spark.cassandra.connection.connections_per_executor_max" -> "10",
"spark.cassandra.output.concurrent.writes" -> "1000", "spark.cassandra.concurrent.reads" -> "512", "spark.cassandra.output.batch.grouping.buffer.size" -> "1000",
DFfromNativeCassandra
  .write
  .format("org.apache.spark.sql.cassandra")
  .options(cosmosCassandra)
+ .mode(SaveMode.Append)
  .save
```

> [!NOTE]
-> The `spark.cassandra.output.batch.size.rows`, `spark.cassandra.output.concurrent.writes` and `connections_per_executor_max` configurations are important to avoid [rate limiting](/samples/azure-samples/azure-cosmos-cassandra-java-retry-sample/azure-cosmos-db-cassandra-java-retry-sample/), which happens when requests to Azure Cosmos DB exceed provisioned throughput/([request units](./request-units.md)). You may need to adjust these settings depending on the number of executors in the Spark cluster, and potentially the size (and therefore RU cost) of each record being written to the target tables.
+> The values for `spark.cassandra.output.batch.size.rows` and `spark.cassandra.output.concurrent.writes`, as well as the number of workers in your Spark cluster, are important configurations to tune in order to avoid [rate limiting](/samples/azure-samples/azure-cosmos-cassandra-java-retry-sample/azure-cosmos-db-cassandra-java-retry-sample/), which happens when requests to Azure Cosmos DB exceed provisioned throughput ([request units](./request-units.md)). You may need to adjust these settings depending on the number of executors in the Spark cluster, and potentially the size (and therefore RU cost) of each record being written to the target tables.
## Troubleshooting
You may see an error code of 429 or `request rate is large` error text, despite
- **Throughput allocated to the table is less than 6,000 [request units](./request-units.md)**. Even at minimum settings, Spark can execute writes at a rate of around 6,000 request units or more. If you have provisioned a table in a keyspace with shared throughput, it's possible that this table has fewer than 6,000 RUs available at runtime. Ensure the table you're migrating to has at least 6,000 RUs available when running the migration, and if necessary allocate dedicated request units to that table.
- **Excessive data skew with large data volume**. If you have a large amount of data (that is, table rows) to migrate into a given table but have a significant skew in the data (that is, a large number of records being written for the same partition key value), then you may still experience rate limiting even if you have a large number of [request units](./request-units.md) provisioned in your table. This is because request units are divided equally among physical partitions, and heavy data skew can result in a bottleneck of requests to a single partition, causing rate limiting. In this scenario, it's advised to reduce to minimal throughput settings in Spark (a throttled-write sketch appears later in this section) to avoid rate limiting and force the migration to run slowly. This scenario can be more common when migrating reference or control tables, where access is less frequent but skew can be high. However, if a significant skew is present in any other type of table, it may also be advisable to review your data model to avoid hot-partition issues for your workload during steady-state operations.
-- **Unable to get count on large table**. Running `select count(*) from table` is not currently supported for large tables. You can get the count from metrics in Azure portal (see our [troubleshooting article](cassandra-troubleshoot.md)), but if you need to determine the count of a large table from within the context of a Spark job, you can copy the data to a temporary table and then use Spark SQL to get the count, for example as shown below (replace `<primary key>` with some field from the resulting temporary table).-
- ```scala
- val ReadFromCosmosCassandra = sqlContext
- .read
- .format("org.apache.spark.sql.cassandra")
- .options(cosmosCassandra)
- .load
-
- ReadFromCosmosCassandra.createOrReplaceTempView("CosmosCassandraResult")
- %sql
- select count(<primary key>) from CosmosCassandraResult
- ```
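For the data-skew scenario called out above, the following is a minimal sketch of a throttled write, reusing the `cosmosCassandra` options and the `DFfromNativeCassandra` DataFrame from the earlier snippets; the exact values are assumptions you would tune for your own workload:

```scala
import org.apache.spark.sql.SaveMode

// Sketch only: deliberately low write concurrency for heavily skewed tables,
// reusing the cosmosCassandra options and DFfromNativeCassandra DataFrame from above.
// The values shown are illustrative starting points, not recommendations.
val slowWriteOptions = cosmosCassandra ++ Map(
  "spark.cassandra.output.batch.size.rows" -> "1",
  "spark.cassandra.output.concurrent.writes" -> "1"
)

DFfromNativeCassandra
  .write
  .format("org.apache.spark.sql.cassandra")
  .options(slowWriteOptions)
  .mode(SaveMode.Append)
  .save
```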
++ ## Next steps
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-sql-api-dotnet-bulk-import.md
Previously updated : 09/21/2020 Last updated : 03/15/2021
Let's start by overwriting the default `Main` method and defining the global var
private const string AuthorizationKey = "<your-account-key>"; private const string DatabaseName = "bulk-tutorial"; private const string ContainerName = "items";
- private const int ItemsToInsert = 300000;
+ private const int AmountToInsert = 300000;
static async Task Main(string[] args) {
Next, create a helper function inside the `Program` class. This helper function
[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Bogus)]
-Read the items and serialize them into stream instances by using the `System.Text.Json` class. Because of the nature of the autogenerated data, you are serializing the data as streams. You can also use the item instance directly, but by converting them to streams, you can leverage the performance of stream APIs in the CosmosClient. Typically you can use the data directly as long as you know the partition key.
--
-To convert the data to stream instances, within the `Main` method, add the following code right after creating the container:
+Use the helper function to initialize a list of documents to work with:
[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=Operations)]
-Next use the data streams to create concurrent tasks and populate the task list to insert the items into the container. To perform this operation, add the following code to the `Program` class:
+Next use the list of documents to create concurrent tasks and populate the task list to insert the items into the container. To perform this operation, add the following code to the `Program` class:
[!code-csharp[Main](~/cosmos-dotnet-bulk-import/src/Program.cs?name=ConcurrentTasks)]
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
+
+ Title: Link a partner ID to your Power Apps accounts with your Azure credentials
+description: This article helps Microsoft partners use their Azure credentials to help customers use Microsoft Power Apps.
+++++ Last updated : 03/16/2021+++
+# Link a partner ID to your Power Apps accounts
+
+This article helps Microsoft partners who are Power Apps service providers associate their services with customers on Microsoft Power Apps. When you (the Microsoft partner) manage, configure, and support Power Apps services for your customer, you have access to your customer's environment. You can use your Azure credentials and a Partner Admin Link (PAL) to associate your partner network ID with the account credentials used for service delivery.
+
+The PAL allows Microsoft to identify and recognize partners that have Power Apps customers. Microsoft attributes usage to a partner's organization based on the account's permissions (Power Apps role) and scope (tenant, resource group, resource).
+
+## Get access from your customer
+
+Before you link your partner ID, your customer must give you access to their Power Apps resources by using one of the following options:
+
+- **Guest user** - Your customer can add you as a guest user and assign any Power Apps roles. For more information, see [Add guest users from another directory](../../active-directory/external-identities/what-is-b2b.md).
+- **Directory account** - Your customer can create a user account for you in their own directory and assign any Power Apps role.
+- **Service principal** - Your customer can add an app or script from your organization in their directory and assign any Power Apps role. The identity of the app or script is known as a service principal.
+- **Delegated Administrator** - Your customer can delegate a resource group so that your users can work on it from within your tenant. For more information, see [For partners: the Delegated Administrator](/power-platform/admin/for-partners-delegated-administrator).
+
+## Link customer to a partner ID
+
+When you have access to your customer's resources, use the Azure portal, PowerShell, or the Azure CLI to link your Microsoft Partner Network ID (MPN ID) to your user ID or service principal. Link the partner ID to each customer tenant.
+
+### Use the Azure portal to link to a new partner ID
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
+1. Enter the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner profile.
+ :::image type="content" source="./media/link-partner-id-power-apps-accounts/link-partner-id.png" alt-text="Screenshot showing the Link to a partner ID window." lightbox="./media/link-partner-id-power-apps-accounts/link-partner-id.png" :::
+1. To link your partner ID to another customer, switch the directory. Under **Switch directory**, select the appropriate directory.
+ :::image type="content" source="./media/link-partner-id-power-apps-accounts/switch-directory.png" alt-text="Screenshot showing the Directory + subscription window where can you switch your directory." lightbox="./media/link-partner-id-power-apps-accounts/switch-directory.png" :::
+
+### Use PowerShell to link to a new partner ID
+
+Install the [Az.ManagementPartner](https://www.powershellgallery.com/packages/Az.ManagementPartner/) Azure PowerShell module.
+
+Sign into the customer's tenant with either the user account or the service principal. For more information, see [Sign in with PowerShell](/powershell/azure/authenticate-azureps).
+
+```azurepowershell-interactive
+Connect-AzAccount -Tenant "<TenantId>"
+```
+
+Link to the new partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner profile.
+
+```azurepowershell-interactive
+New-AzManagementPartner -PartnerId 12345
+```
+
+Get the linked partner ID:
+
+```azurepowershell-interactive
+Get-AzManagementPartner
+```
+
+Update the linked partner ID:
+
+```azurepowershell-interactive
+Update-AzManagementPartner -PartnerId 12345
+```
+
+Delete the linked partner ID:
+
+```azurepowershell-interactive
+Remove-AzManagementPartner -PartnerId 12345
+```
+
+### Use the Azure CLI to link to a new partner ID
+
+First, install the Azure CLI extension.
+
+```azurecli-interactive
+az extension add --name managementpartner
+```
+
+Sign into the customer's tenant with either the user account or the service principal. For more information, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+
+```azurecli-interactive
+az login --tenant TenantName
+```
+
+Link to the new partner ID. The partner ID is the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization.
+
+```azurecli-interactive
+az managementpartner create --partner-id 12345
+```
+
+Get the linked partner ID:
+
+```azurecli-interactive
+az managementpartner show
+```
+
+Update the linked partner ID:
+
+```azurecli-interactive
+az managementpartner update --partner-id 12345
+```
+
+Delete the linked partner ID:
+
+```azurecli-interactive
+az managementpartner delete --partner-id 12345
+```
+
+## Frequently asked questions (FAQ)
+
+The following sections cover frequently asked questions about linking a partner ID to Power Apps accounts.
+
+### Who should link the partner ID?
+
+Any user from the partner organization who works on a customer's Power Apps resources can link the partner ID to the account. Ideally, the PAL association should be done at the beginning of the project. However, it can be done whenever you have access to the customer's directory.
+
+### Can a partner ID be changed after it's linked?
+
+Yes. A linked partner ID can be changed, added, or removed. One example for this situation might be when an employee from your company leaves your organization. Another example might be when a project or contract with the customer ends.
+
+### What if a user has an account in more than one customer tenant?
+
+The link between the partner ID and the account is done for each customer tenant. Link the partner ID in each customer tenant.
+
+### Can other partners or customers edit or remove the link to the partner ID?
+
+The link is associated at the user account level. Only you can edit or remove the link to the partner ID. The customer and other partners can't change the link to the partner ID.
+
+### Which MPN ID should I use if my company has multiple?
+
+Be sure to use the **Associated MPN ID** shown in your partner profile. It's usually the local account ID associated with your organization.
+
+### How do I explain PAL to my customer?
+
+PAL enables Microsoft to identify and recognize partners who help customers achieve business goals and realize value in the cloud. Customers must first give a partner access to their Power Apps resource. After access is granted, the partner's Microsoft Partner Network ID (MPN ID) is associated. This association helps Microsoft understand service providers and refine the tools and programs needed to best support customers.
+
+### What data does PAL collect?
+
+The PAL association to existing credentials provides no new customer data to Microsoft. It tells Microsoft where a partner is actively involved in a customer's Power Apps environments. Microsoft can attribute usage and influence from the customer environment to the partner organization based on the account's permissions (Power Apps role) and scope (tenant, resource group, resource) that the customer provided to the partner.
+
+### Does PAL association affect the security of a customer's Power Apps environment?
+
+PAL association only adds the partner's MPN ID to the credentials that are already provisioned. It doesn't alter any permissions (Power Apps role) or provide extra Power Apps service data to the partner or to Microsoft.
+
+### Next steps
+
+- Join the discussion in the [Microsoft Partner Community](https://aka.ms/PALdiscussion) to receive updates or send feedback.
+- Read the [Low Code Application Development advanced specialization FAQ](https://assetsprod.microsoft.com/mpn/faq-low-code-app-development-advanced-specialization.pdf) for PAL-based Power Apps association for Low code application development advanced specialization.
cost-management-billing Upgrade Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/upgrade-azure-subscription.md
Title: Upgrade your Azure account
description: Learn how to upgrade your Azure free or Azure for Students Starter account. See additional information about Azure support plans. keywords: pay as you go upgrade -+ tags: billing Previously updated : 12/04/2020 Last updated : 03/11/2021 # Upgrade your Azure free account or Azure for Students Starter account
-You can upgrade your [Azure free account](https://azure.microsoft.com/free/) or [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) in the Azure portal.
+You can upgrade your [Azure free account](https://azure.microsoft.com/free/) to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) in the Azure portal.
-If you've signed up for an [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) and are eligible for an [Azure free account](https://azure.microsoft.com/free/), you can upgrade to a free account. You'll get $200 of Azure credits and 12 months of free services on upgrade.
+If you have an [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) and are eligible for an [Azure free account](https://azure.microsoft.com/free/), you can upgrade it to an [Azure free account](https://azure.microsoft.com/free/). You'll get $200 of Azure credits and 12 months of free services on upgrade. If you don't qualify for a free account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+If you have an [Azure for Students](https://azure.microsoft.com/offers/ms-azr-0170p/) account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
<a id="freetrial"></a>
When you upgrade your Azure free account, you keep your remaining credit for the
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for **Subscriptions**.
- ![Screenshot that shows search](./media/upgrade-azure-subscription/search-subscriptions-ibiza.png)
+ :::image type="content" source="./media/upgrade-azure-subscription/search-subscriptions.png" alt-text="Screenshot that shows search." lightbox="./media/upgrade-azure-subscription/search-subscriptions.png" :::
1. Select the subscription that was created when you signed up for Azure free account.
-1. In the subscription overview, select **Upgrade subscription** in the command bar. If you don't see the upgrade subscription button, select the upgrade banner at the top of the page.
- ![Screenshot that shows upgrade button](./media/upgrade-azure-subscription/free-upgrade-button.png)
-1. If you don't have a payment method for your account, you'll be prompted to add one.
+1. In the subscription overview, select **Upgrade subscription** in the command bar. If you don't see the upgrade subscription option, select the upgrade banner at the top of the page.
+ :::image type="content" source="./media/upgrade-azure-subscription/free-upgrade-button.png" alt-text="Screenshot that shows the Upgrade option." lightbox="./media/upgrade-azure-subscription/free-upgrade-button.png" :::
+1. If you don't have a payment method for your account, you're prompted to add one.
1. You might need to enter a phone number to verify your identity.
1. Type a name for your subscription.
- ![Screenshot that shows name](./media/upgrade-azure-subscription/free-upgrade-name.png)
+ :::image type="content" source="./media/upgrade-azure-subscription/free-upgrade-name.png" alt-text="Screenshot that shows the subscription name." lightbox="./media/upgrade-azure-subscription/free-upgrade-name.png" :::
1. Choose a support plan for your subscription. To learn more about support plans, see [Azure support plans](https://azure.microsoft.com/us/support/plans/).
1. Select **Upgrade**.
If you're eligible, use the steps below to upgrade to an Azure free account.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for **Subscriptions**.
- ![Screenshot that shows search](./media/upgrade-azure-subscription/search-subscriptions-ibiza.png)
+ :::image type="content" source="./media/upgrade-azure-subscription/search-subscriptions.png" alt-text="Screenshot that shows search." lightbox="./media/upgrade-azure-subscription/search-subscriptions.png" :::
1. Select the subscription that was created when you signed up for your Azure for Students Starter account.
-1. In the subscription overview, select **Upgrade subscription** in the command bar.
- ![Screenshot that shows upgrade button for students](./media/upgrade-azure-subscription/student-upgrade-ibiza.png)
-
-### Upgrade to pay-as-you-go rates
-
-1. If you're upgrading to pay-as-you-go rates and don't already have a payment method for your subscription, you'll be prompted to add one.
-1. You might need to enter a phone number to verify your identity.
-1. Type a name for your subscription.
-1. Choose a support plan for your subscription. To learn more about support plans, see [Azure support plans](https://azure.microsoft.com/us/support/plans/).
-1. Select **Upgrade**.
+1. In the subscription overview, select **Upgrade** in the command bar.
+ :::image type="content" source="./media/upgrade-azure-subscription/student-upgrade.png" alt-text="Screenshot that shows upgrade option for students." lightbox="./media/upgrade-azure-subscription/student-upgrade.png" :::
## Next steps
-Now that you've upgraded your account, see [Plan to manage Azure costs](../understand/plan-manage-costs.md).
+- Now that you've upgraded your account, see [Plan to manage Azure costs](../understand/plan-manage-costs.md).
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 02/24/2021 Last updated : 03/16/2021
Azure has the following policies for cancellations, exchanges, and refunds.
- We're currently not charging an early termination fee, but in the future there might be a 12% early termination fee for cancellations.
- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, for a three-year reservation that's 100 USD per month and is refunded in the 18th month, the canceled commitment is 1,800 USD (see the sketch after this list). After the refund, your new available limit for refund will be 48,200 USD. In 365 days from the refund, the 48,200 USD limit will be increased by 1,800 USD and your new pool will be 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment will deplete the same pool, and the same replenishment logic will apply.
- Azure won't process any refund that exceeds the 50,000 USD limit in a 12-month window for a billing profile or EA enrollment.
+ - Refunds that result from an exchange don't count against the refund limit.
- Refunds are calculated based on the lowest price of either your purchase price or the current price of the reservation.
- Only reservation order owners can process a refund. [Learn how to Add or change users who can manage a reservation](manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
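To make the rolling-window example above concrete, here is a small illustrative sketch of the arithmetic; the figures are simply the ones used in the example:

```scala
// Sketch only: the rolling-window arithmetic from the example above.
// A 3-year (36-month) reservation at 100 USD/month, refunded in month 18.
val termMonths      = 36
val monthlyPrice    = 100.0   // USD
val refundedInMonth = 18

val canceledCommitment = monthlyPrice * (termMonths - refundedInMonth)   // 1800.0 USD

val refundLimit    = 50000.0  // USD per 12-month rolling window
val remainingLimit = refundLimit - canceledCommitment                    // 48200.0 USD
```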
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-overview.md
Last updated 12/10/2020
# Mapping data flows in Azure Data Factory

## What are mapping data flows?
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 01/29/2021 Last updated : 03/15/2021 # Mapping data flows performance and tuning guide
In joins, lookups, and exists transformations, if one or both data streams are s
If the size of the broadcasted data is too large for the Spark node, you may get an out of memory error. To avoid out of memory errors, use **memory optimized** clusters. If you experience broadcast timeouts during data flow executions, you can switch off the broadcast optimization. However, this will result in slower performing data flows.
+When working with data sources that can take longer to query, like large database queries, it is recommended to turn broadcast off for joins. Sources with long query times can cause Spark timeouts when the cluster attempts to broadcast to compute nodes. Another good reason to turn off broadcast is when you have a stream in your data flow that is aggregating values for use in a lookup transformation later. This pattern can confuse the Spark optimizer and cause timeouts.
+ ![Join Transformation optimize](media/data-flow/joinoptimize.png "Join Optimization") #### Cross joins
On the pipeline execute data flow activity under the "Sink Properties" section i
See other Data Flow articles related to performance: - [Data Flow activity](control-flow-execute-data-flow-activity.md)-- [Monitor Data Flow performance](concepts-data-flow-monitoring.md)
+- [Monitor Data Flow performance](concepts-data-flow-monitoring.md)
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Previously updated : 03/12/2021 Last updated : 03/15/2021 # Copy and transform data in Azure SQL Database by using Azure Data Factory
Settings specific to Azure SQL Database are available in the **Source Options**
**Query**: If you select Query in the input field, enter a SQL query for your source. This setting overrides any table that you've chosen in the dataset. **Order By** clauses aren't supported here, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table. This query will produce a source table that you can use in your data flow. Using queries is also a great way to reduce rows for testing or for lookups.
+**Stored procedure**: Choose this option if you wish to generate a projection and source data from a stored procedure that is executed from your source database. You can type in the schema, procedure name, and parameters, or select **Refresh** to ask ADF to discover the schemas and procedure names. Then you can select **Import** to import all procedure parameters using the form ``@paraName``.
+
+![Stored procedure](media/data-flow/stored-procedure-2.png "Stored Procedure")
+ - SQL Example: ```Select * from MyTable where customerId > 1000 and customerId < 2000```
+- Parameterized SQL Example: ``"select * from {$tablename} where orderyear > {$year}"``
**Batch size**: Enter a batch size to chunk large data into reads.
More specifically:
## Next steps
-For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
The following properties are supported for HTTP under `storeSettings` settings i
| Property | Description | Required |
| ---- | ----------- | -------- |
| type | The type property under `storeSettings` must be set to **HttpReadSettings**. | Yes |
| requestMethod | The HTTP method. <br>Allowed values are **Get** (default) and **Post**. | No |
-| addtionalHeaders | Additional HTTP request headers. | No |
+| additionalHeaders | Additional HTTP request headers. | No |
| requestBody | The body for the HTTP request. | No |
| httpRequestTimeout | The timeout (the **TimeSpan** value) for the HTTP request to get a response. This value is the timeout to get a response, not the timeout to read response data. The default value is **00:01:40**. | No |
| maxConcurrentConnections | The number of the connections to connect to storage store concurrently. Specify only when you want to limit the concurrent connection to the data store. | No |
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
Previously updated : 12/08/2020 Last updated : 03/16/2021 # Copy and transform data in Snowflake by using Azure Data Factory
The following properties are supported for a Snowflake-linked service.
"properties": { "type": "Snowflake", "typeProperties": {
- "connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&db=<database>&warehouse=<warehouse>&role=<myRole>",
- "password": {
- "type": "SecureString",
- "value": "<password>"
- }
+ "connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&password=<password>&db=<database>&warehouse=<warehouse>&role=<myRole>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Collects all values of the expression in the aggregated group into an array. Str
___
### <code>columnNames</code>
<code><b>columnNames(<i>&lt;value1&gt;</i> : string) => array</b></code><br/><br/>
-Gets all output columns for a stream. You can pass an optional stream name as the second argument.
+Gets the names of all output columns for a stream. You can pass an optional stream name as the second argument.
* ``columnNames()`` * ``columnNames('DeriveStream')``- ___ ### <code>columns</code> <code><b>columns([<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
-Gets all output columns for a stream. You can pass an optional stream name as the second argument.
+Gets the values of all output columns for a stream. You can pass an optional stream name as the second argument.
* ``columns()``
* ``columns('DeriveStream')``
___
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-dot-net.md
ms.devlang: dotnet Previously updated : 12/18/2020 Last updated : 03/16/2021 # Quickstart: Create a data factory and pipeline using .NET SDK
This quickstart describes how to use .NET SDK to create an Azure Data Factory. T
The walkthrough in this article uses Visual Studio 2019. The procedures for Visual Studio 2013, 2015, or 2017 differ slightly.
-### Azure .NET SDK
-
-Download and install [Azure .NET SDK](https://azure.microsoft.com/downloads/) on your machine.
- ## Create an application in Azure Active Directory From the sections in *How to: Use the portal to create an Azure AD application and service principal that can access resources*, follow the instructions to do these tasks:
data-factory Scenario Dataflow Process Data Aml Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scenario-dataflow-process-data-aml-models.md
ms.co- -

# Process data from automated machine learning (AutoML) models using data flow

+ Automated machine learning (AutoML) is adopted by machine learning projects to automatically train, tune, and select the best model by using a target metric that you specify for classification, regression, and time-series forecasting. One challenge is that raw data from a data warehouse or transactional database can be a huge dataset (for example, 10 GB), and a large dataset takes longer to train models, so optimizing data processing is recommended before training Azure Machine Learning models. This tutorial goes through how to use ADF to partition a dataset into Parquet files for an Azure Machine Learning dataset.
databox-gateway Data Box Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-gateway/data-box-gateway-overview.md
Previously updated : 08/21/2019 Last updated : 03/15/2021 #Customer intent: As an IT admin, I need to understand what Data Box Gateway is and how it works so I can use it to send data to Azure.
The Data Box Gateway solution comprises of Data Box Gateway resource, Data Box G
## Region availability
-Data Box Gateway physical device, Azure resource, and target storage account to which you transfer data do not all have to be in the same region.
+Data Box Gateway device, Azure resource, and target storage account to which you transfer data do not all have to be in the same region.
- **Resource availability** - For a list of all the regions where the Azure Data Box Gateway resource is available, go to [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=databox). Data Box Gateway can also be deployed in the Azure Government Cloud. For more information, see [What is Azure Government?](../azure-government/documentation-government-welcome.md).
Data Box Gateway physical device, Azure resource, and target storage account to
- Review the [Data Box Gateway system requirements](data-box-gateway-system-requirements.md). - Understand the [Data Box Gateway limits](data-box-gateway-limits.md).-- Deploy [Azure Data Box Gateway](data-box-gateway-deploy-prep.md) in Azure portal.
+- Deploy [Azure Data Box Gateway](data-box-gateway-deploy-prep.md) in Azure portal.
databox-online Azure Stack Edge Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-deploy-prep.md
Previously updated : 01/22/2021 Last updated : 03/16/2021 Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro so I can use it to transfer data to Azure. # Tutorial: Prepare to deploy Azure Stack Edge Pro
-This is the first tutorial in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
+This is the first tutorial in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
-You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
+You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
In this tutorial, you learn how to:
Before you begin, make sure that:
* For normal operating conditions of your Azure Stack Edge Pro, you have:
- * A minimum of 10-Mbps download bandwidth to ensure the device stays updated.
- * A minimum of 20-Mbps dedicated upload and download bandwidth to transfer files.
+ * A minimum of 10 Mbps download bandwidth to ensure the device stays updated.
+ * A minimum of 20 Mbps dedicated upload and download bandwidth to transfer files.
-## Create a new resource
+## Create new resource for existing device
-If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
+If you're an existing Azure Stack Edge Pro customer, use the following procedure to create a new resource if you need to replace or reset your existing device.
-### [Portal](#tab/azure-portal)
+If you're a new customer, we recommend that you explore using Azure Stack Edge Pro - GPU devices for your workloads. For more information, go to [What is Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-overview.md). For information about ordering an Azure Stack Edge Pro with GPU device, go to [Create a new resource for Azure Stack Edge Pro - GPU](azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal#create-a-new-resource).
-To create an Azure Stack Edge resource, take the following steps in the Azure portal.
+To create a new Azure Stack Edge Pro resource for an existing device, take the following steps in the Azure portal.
-1. Use your Microsoft Azure credentials to sign in to
+1. Use your Microsoft Azure credentials to sign in to:
- The Azure portal at this URL: [https://portal.azure.com](https://portal.azure.com). - Or, the Azure Government portal at this URL: [https://portal.azure.us](https://portal.azure.us). For more details, go to [Connect to Azure Government using the portal](../azure-government/documentation-government-get-started-connect-with-portal.md).
-2. In the left-pane, select **+ Create a resource**. Search for and select **Azure Stack Edge / Data Box Gateway**. Select **Create**.
-3. Pick the subscription that you want to use for the Azure Stack Edge Pro device. Select the region where you want to deploy the Azure Stack Edge resource. For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all).
+1. Select **+ Create a resource**. Search for and select **Azure Stack Edge**. Then select **Create**.
- Choose a location closest to the geographical region where you want to deploy your device. The region stores only the metadata for device management. The actual data can be stored in any storage account.
-
- In the **Azure Stack Edge Pro** option, select **Create**.
-
- ![Search Azure Stack Edge service](media/azure-stack-edge-deploy-prep/data-box-edge-sku.png)
-
-3. On the **Basics** tab, enter or select the following **Project details**.
-
- |Setting |Value |
- |||
- |Subscription |This value is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
- |Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). |
-
-4. Enter or select the following **Instance details**.
-
- |Setting |Value |
- |||
- |Name | A friendly name to identify the resource.<br>The name has from 2 and 50 characters, including letters, numbers, and hyphens.<br> Name starts and ends with a letter or a number. |
- |Region |For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If using Azure Government, all the government regions are available as shown in the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).<br> Choose a location closest to the geographical region where you want to deploy your device.|
+1. Select the subscription for the Azure Stack Edge Pro device and the country to ship the device to in **Ship to**.
- ![Project and instance details](media/azure-stack-edge-deploy-prep/data-box-edge-resource.png)
+ ![Select the subscription and ship-to country for your device](media/azure-stack-edge-deploy-prep/create-fpga-existing-resource-01.png)
-5. Select **Next: Shipping address**.
- - If you already have a device, select the combo box for **I have an Azure Stack Edge device**.
- - If this is the new device that you are ordering, enter the contact name, company, address to ship the device, and contact information.
+1. In the list of device types that is displayed, select **Azure Stack Edge Pro - FPGA**. Then choose **Select**.
- ![Shipping address for new device](media/azure-stack-edge-deploy-prep/data-box-edge-resource1.png)
+ The **Azure Stack Edge Pro - FPGA** device type is only displayed if you have an existing device. If you need to order a new device, go to [Create a new resource for Azure Stack Edge Pro - GPU](azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal#create-a-new-resource).
-6. Select **Next: Review + create**.
+ ![Search Azure Stack Edge service](media/azure-stack-edge-deploy-prep/create-fpga-existing-resource-02.png)
-7. On the **Review + create** tab, review the **Pricing details**, **Terms of use**, and the details for your resource. Select the combo box for **I have reviewed the privacy terms**.
+1. On the **Basics** tab:
- ![Review Azure Stack Edge resource details and privacy terms](media/azure-stack-edge-deploy-prep/data-box-edge-resource2.png)
-
-8. Select **Create**.
-
- The resource creation takes a few minutes. After the resource is successfully created and deployed, you're notified. Select **Go to resource**.
-
- ![Go to the Azure Stack Edge resource](media/azure-stack-edge-deploy-prep/data-box-edge-resource3.png)
-
-After the order is placed, Microsoft reviews the order and contacts you (via email) with shipping details.
-
-![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-deploy-prep/data-box-edge-resource4.png)
-
-> [!NOTE]
-> If you want to create multiple orders at one time or clone an existing order, you can use the [scripts in Azure Samples](https://github.com/Azure-Samples/azure-stack-edge-order). For more information, see the README file.
-
-### [Azure CLI](#tab/azure-cli)
-
-If necessary, prepare your environment for Azure CLI.
--
-To create an Azure Stack Edge resource, run the following commands in Azure CLI.
-
-1. Create a resource group by using the [az group create](/cli/azure/group#az_group_create) command, or use an existing resource group:
+ 1. Enter or select the following **Project details**.
+
+ |Setting |Value |
+ |||
+ |Subscription |This value is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
+ |Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). |
- ```azurecli
- az group create --name myasepgpu1 --location eastus
- ```
+ 1. Enter or select the following **Instance details**.
-1. To create a device, use the [az databoxedge device create](/cli/azure/databoxedge/device#az_databoxedge_device_create) command:
+ |Setting |Value |
+ |||
+ |Name | A friendly name to identify the resource.<br>The name can have from 2 to 50 characters, including letters, numbers, and hyphens.<br>The name starts and ends with a letter or a number. |
+ |Region |For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If using Azure Government, all the government regions are available as shown in the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).<br> Choose a location closest to the geographical region where you want to deploy your device.|
- ```azurecli
- az databoxedge device create --resource-group myasepgpu1 \
- --device-name myasegpu1 --location eastus --sku Edge
- ```
+ 1. Select **Review + create**.
- Choose a location closest to the geographical region where you want to deploy your device. The region stores only the metadata for device management. The actual data can be stored in any storage account.
+ ![Project and instance details](media/azure-stack-edge-deploy-prep/create-fpga-existing-resource-03.png)
- For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If using Azure Government, all the government regions are available as shown in the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).
+1. On the **Review + create** tab, review the **Terms of use**, **Pricing details**, and the details for your resource. Then select **Create**.
-1. To create an order, run the [az databoxedge order create](/cli/azure/databoxedge/order#az_databoxedge_order_create) command:
+ ![Review Azure Stack Edge resource details and privacy terms](media/azure-stack-edge-deploy-prep/create-fpga-existing-resource-04.png)
- ```azurecli
- az databoxedge order create --resource-group myasepgpu1 \
- --device-name myasegpu1 --company-name "Contoso" \
- --address-line1 "1020 Enterprise Way" --city "Sunnyvale" \
- --state "California" --country "United States" --postal-code 94089 \
- --contact-person "Gus Poland" --email-list gus@contoso.com --phone 4085555555
- ```
+1. The resource creation takes a few minutes. After the resource is successfully created and deployed, you're notified. Select **Go to resource**.
-The resource creation takes a few minutes. Run the [az databoxedge order show](/cli/azure/databoxedge/order#az_databoxedge_order_show) command to see the order:
+ ![Go to the Azure Stack Edge resource](media/azure-stack-edge-deploy-prep/data-box-edge-resource-01.png)
-```azurecli
-az databoxedge order show --resource-group myasepgpu1 --device-name myasegpu1
-```
+After the order is placed, Microsoft reviews the order and contacts you (via email) with shipping details.
-After you place an order, Microsoft reviews the order and contacts you by email with shipping details.
+![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-deploy-prep/data-box-edge-resource-02.png)
- ## Get the activation key
After the Azure Stack Edge resource is up and running, you'll need to get the ac
1. Go to the resource that you created, and select **Overview**. You'll see a notification to the effect that your order is being processed.
- ![Select Overview](media/azure-stack-edge-deploy-prep/data-box-edge-select-devicesetup.png)
+ ![Select Overview](media/azure-stack-edge-deploy-prep/data-box-edge-select-device-setup.png)
2. After the order is processed and the device is on your way, the **Overview** updates. Accept the default **Azure Key Vault name** or enter a new one. Select **Generate activation key**. Select the copy icon to copy the key and save it for later use.
databox-online Azure Stack Edge Gpu Configure Gpu Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-configure-gpu-modules.md
Previously updated : 03/08/2021 Last updated : 03/12/2021 # Configure and run a module on GPU on Azure Stack Edge Pro device
To configure a module to use the GPU on your Azure Stack Edge Pro device to run
For more information on environment variables that you can use with the Nvidia GPU, go to [nVidia container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec). > [!NOTE]
- > A GPU can only be mapped to one module. A module can however use one, both or no GPUs.
+ > A module can use one, both or no GPUs.
12. Enter a name for your module. At this point you can choose to provide container create option and modify module twin settings or if done, select **Add**.
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
The following table has a brief description of the commands available for `ioted
|`logs` | Fetch the logs of a module | |`restart` | Stop and restart a module |
+#### List all IoT Edge modules
To list all the modules running on your device, use the `iotedge list` command.
webserverapp Running Up 10 days nginx:stable
[10.100.10.10]: PS> ```
+#### Restart modules
+You can use the `list` command to list all the modules running on your device. Then identify the name of the module that you want to restart and use it with the `restart` command.
+
+Here is a sample output of how to restart a module. Based on the description of how long the module is running for, you can see that `cuda-sample1` was restarted.
+
+```powershell
+[10.100.10.10]: PS>iotedge list
+
+NAME STATUS DESCRIPTION CONFIG EXTERNAL-IP PORT(S)
+- -- -- -
+edgehub Running Up 5 days mcr.microsoft.com/azureiotedge-hub:1.0 10.57.48.62 443:31457/TCP,5671:308
+ 81/TCP,8883:31753/TCP
+iotedged Running Up 7 days azureiotedge/azureiotedge-iotedged:0.1.0-beta13 <none> 35000/TCP,35001/TCP
+cuda-sample2 Running Up 1 days nvidia/samples:nbody
+edgeagent Running Up 7 days azureiotedge/azureiotedge-agent:0.1.0-beta13
+cuda-sample1 Running Up 1 days nvidia/samples:nbody
+
+[10.100.10.10]: PS>iotedge restart cuda-sample1
+[10.100.10.10]: PS>iotedge list
+
+NAME STATUS DESCRIPTION CONFIG EXTERNAL-IP PORT(S)
+- -- -- -
+edgehub Running Up 5 days mcr.microsoft.com/azureiotedge-hub:1.0 10.57.48.62 443:31457/TCP,5671:30
+ 881/TCP,8883:31753/TC
+ P
+iotedged Running Up 7 days azureiotedge/azureiotedge-iotedged:0.1.0-beta13 <none> 35000/TCP,35001/TCP
+cuda-sample2 Running Up 1 days nvidia/samples:nbody
+edgeagent Running Up 7 days azureiotedge/azureiotedge-agent:0.1.0-beta13
+cuda-sample1 Running Up 4 minutes nvidia/samples:nbody
+
+[10.100.10.10]: PS>
+
+```
+
+#### Get module logs
+
+Use the `logs` command to get logs for any IoT Edge module running on your device.
+
+If there was an error in creation of the container image or while pulling the image, run `logs edgeagent`. `edgeagent` is the IoT Edge runtime container that is responsible for provisioning other containers. Because `logs edgeagent` dumps all the logs, a good way to see the recent errors is to use the `--tail` option, for example `--tail 10`.
+
+Here is a sample output.
+
+```powershell
+[10.100.10.10]: PS>iotedge logs cuda-sample2 --tail 10
+[10.100.10.10]: PS>iotedge logs edgeagent --tail 10
+<6> 2021-02-25 00:52:54.828 +00:00 [INF] - Executing command: "Report EdgeDeployment status: [Success]"
+<6> 2021-02-25 00:52:54.829 +00:00 [INF] - Plan execution ended for deployment 11
+<6> 2021-02-25 00:53:00.191 +00:00 [INF] - Plan execution started for deployment 11
+<6> 2021-02-25 00:53:00.191 +00:00 [INF] - Executing command: "Create an EdgeDeployment with modules: [cuda-sample2, edgeAgent, edgeHub, cuda-sample1]"
+<6> 2021-02-25 00:53:00.212 +00:00 [INF] - Executing command: "Report EdgeDeployment status: [Success]"
+<6> 2021-02-25 00:53:00.212 +00:00 [INF] - Plan execution ended for deployment 11
+<6> 2021-02-25 00:53:05.319 +00:00 [INF] - Plan execution started for deployment 11
+<6> 2021-02-25 00:53:05.319 +00:00 [INF] - Executing command: "Create an EdgeDeployment with modules: [cuda-sample2, edgeAgent, edgeHub, cuda-sample1]"
+<6> 2021-02-25 00:53:05.412 +00:00 [INF] - Executing command: "Report EdgeDeployment status: [Success]"
+<6> 2021-02-25 00:53:05.412 +00:00 [INF] - Plan execution ended for deployment 11
+[10.100.10.10]: PS>
+```
### Use kubectl commands
databox-online Azure Stack Edge Gpu Deploy Iot Edge Gpu Sharing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md
+
+ Title: Deploy IoT Edge workload using GPU sharing on Azure Stack Edge Pro GPU device
+description: Describes how you can deploy a GPU shared workload via IoT Edge on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/12/2021++
+# Deploy an IoT Edge workload using GPU sharing on your Azure Stack Edge Pro
+
+This article describes how containerized workloads can share the GPUs on your Azure Stack Edge Pro GPU device. The approach involves enabling the Multi-Process Service (MPS) and then specifying the GPU workloads via an IoT Edge deployment.
+
+## Prerequisites
+
+Before you begin, make sure that:
+
+1. You have access to an Azure Stack Edge Pro GPU device that is [activated](azure-stack-edge-gpu-deploy-activate.md) and has [compute configured](azure-stack-edge-gpu-deploy-configure-compute.md). You have the [Kubernetes API endpoint](azure-stack-edge-gpu-deploy-configure-compute.md#get-kubernetes-endpoints) and you have added this endpoint to the `hosts` file on the client that will be accessing the device.
+
+1. You have access to a client system with a [supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). If using a Windows client, the system should run PowerShell 5.0 or later to access the device.
+
+1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/https://docsupdatetracker.net/index.html#running-simple-containers) that are publicly available from Nvidia.
+
+ ```json
+ {
+ "modulesContent": {
+ "$edgeAgent": {
+ "properties.desired": {
+ "modules": {
+ "cuda-sample1": {
+ "settings": {
+ "image": "nvidia/samples:nbody",
+ "createOptions": "{\"Entrypoint\":[\"/bin/sh\"],\"Cmd\":[\"-c\",\"/tmp/nbody -benchmark -i=1000; while true; do echo no-op; sleep 10000;done\"],\"HostConfig\":{\"IpcMode\":\"host\",\"PidMode\":\"host\"}}"
+ },
+ "type": "docker",
+ "version": "1.0",
+ "env": {
+ "NVIDIA_VISIBLE_DEVICES": {
+ "value": "0"
+ }
+ },
+ "status": "running",
+ "restartPolicy": "never"
+ },
+ "cuda-sample2": {
+ "settings": {
+ "image": "nvidia/samples:nbody",
+ "createOptions": "{\"Entrypoint\":[\"/bin/sh\"],\"Cmd\":[\"-c\",\"/tmp/nbody -benchmark -i=1000; while true; do echo no-op; sleep 10000;done\"],\"HostConfig\":{\"IpcMode\":\"host\",\"PidMode\":\"host\"}}"
+ },
+ "type": "docker",
+ "version": "1.0",
+ "env": {
+ "NVIDIA_VISIBLE_DEVICES": {
+ "value": "0"
+ }
+ },
+ "status": "running",
+ "restartPolicy": "never"
+ }
+ },
+ "runtime": {
+ "settings": {
+ "minDockerVersion": "v1.25"
+ },
+ "type": "docker"
+ },
+ "schemaVersion": "1.1",
+ "systemModules": {
+ "edgeAgent": {
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
+ "createOptions": ""
+ },
+ "type": "docker"
+ },
+ "edgeHub": {
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
+ "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"443/tcp\":[{\"HostPort\":\"443\"}],\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}"
+ },
+ "type": "docker",
+ "status": "running",
+ "restartPolicy": "always"
+ }
+ }
+ }
+ },
+ "$edgeHub": {
+ "properties.desired": {
+ "routes": {
+ "route": "FROM /messages/* INTO $upstream"
+ },
+ "schemaVersion": "1.1",
+ "storeAndForwardConfiguration": {
+ "timeToLiveSecs": 7200
+ }
+ }
+ },
+ "cuda-sample1": {
+ "properties.desired": {}
+ },
+ "cuda-sample2": {
+ "properties.desired": {}
+ }
+ }
+ }
+ ```
+
+## Verify GPU driver, CUDA version
+
+The first step is to verify that your device is running required GPU driver and CUDA versions.
+
+1. [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+1. Run the following command:
+
+ `Get-HcsGpuNvidiaSmi`
+
+1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+
+ - GPU driver version: 460.32.03
+ - CUDA version: 11.2
+
+ Here is an example output:
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Tue Feb 23 10:34:01 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 0000041F:00:00.0 Off | 0 |
+ | N/A 40C P8 15W / 70W | 0MiB / 15109MiB | 0% Default |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | No running processes found |
+ +--+
+ [10.100.10.10]: PS>
+ ```
+
+1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
++
+## Deploy without context-sharing
+
+You can now deploy an application on your device when the Multi-Process Service is not running and there is no context-sharing. The deployment is via the Azure portal in the `iotedge` namespace that exists on your device.
+
+### Create user in IoT Edge namespace
+
+First you'll create a user that will connect to the `iotedge` namespace. The IoT Edge modules are deployed in the iotedge namespace. For more information, see [Kubernetes namespaces on your device](azure-stack-edge-gpu-kubernetes-rbac.md#namespaces-types).
+
+Follow these steps to create a user and grant that user access to the `iotedge` namespace.
+
+1. [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+1. Create a new user in the `iotedge` namespace. Run the following command:
+
+ `New-HcsKubernetesUser -UserName <user name>`
+
+ Here is an example output:
+
+ ```powershell
+ [10.100.10.10]: PS>New-HcsKubernetesUser -UserName iotedgeuser
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data:
+ ===========================//snipped //======================// snipped //=============================
+ server: https://compute.myasegpudev.wdshcsso.com:6443
+ name: kubernetes
+ contexts:
+ - context:
+ cluster: kubernetes
+ user: iotedgeuser
+ name: iotedgeuser@kubernetes
+ current-context: iotedgeuser@kubernetes
+ kind: Config
+ preferences: {}
+ users:
+ - name: iotedgeuser
+ user:
+ client-certificate-data:
+ ===========================//snipped //======================// snipped //=============================
+ client-key-data:
+ ===========================//snipped //======================// snipped ============================
+ PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
+ ```
+
+1. Copy the output displayed in plain text. Save the output as a *config* file (with no extension) in the `.kube` folder of your user profile on your local machine, for example, `C:\Users\<username>\.kube`.
+
+1. Grant the user that you created access to the `iotedge` namespace. Run the following command:
+
+ `Grant-HcsKubernetesNamespaceAccess -Namespace iotedge -UserName <user name>`
+
+ Here is an example output:
+
+ ```powershell
+ [10.100.10.10]: PS>Grant-HcsKubernetesNamespaceAccess -Namespace iotedge -UserName iotedgeuser
+ [10.100.10.10]: PS>
+ ```
+For detailed instructions, see [Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-create-kubernetes-cluster.md#configure-cluster-access-via-kubernetes-rbac).
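+
+If you want to confirm that the new user can reach the `iotedge` namespace before you continue, a quick check such as the following should work from your client machine. This is a sketch; the config file path is the example location used above and will differ on your system.
+
+```powershell
+# List pods in the iotedge namespace using the config file you saved earlier.
+# Seeing the IoT Edge system pods (edgeagent, edgehub, iotedged) confirms that access was granted.
+kubectl get pods -n iotedge --kubeconfig "C:\Users\<username>\.kube\config"
+```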
+
+### Deploy modules via portal
+
+Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available Nvidia CUDA sample modules that run an n-body simulation. For more information, see [N-body simulation](https://physics.princeton.edu//~fpretori/Nbody/intro.htm).
+
+1. Make sure that the IoT Edge service is running on your device.
+
+ ![IoT Edge service running.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-1.png)
+
+1. Select the IoT Edge tile in the right-pane. Go to **IoT Edge > Properties**. In the right-pane, select the IoT Hub resource associated with your device.
+
+ ![View properties.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-2.png)
+
+1. In the IoT Hub resource, go to **Automatic Device Management > IoT Edge**. In the right-pane, select the IoT Edge device associated with your device.
+
+ ![Go to IoT Edge.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-3.png)
+
+1. Select **Set modules**.
+
+ ![Go to Set Modules.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-4.png)
+
+1. Select **+ Add > + IoT Edge module**.
+
+ ![Add IoT Edge module.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-5.png)
+
+1. On the **Module Settings** tab, provide the **IoT Edge module name** and **Image URI**. Set **Image pull policy** to **On create**.
+
+ ![Module settings.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-6.png)
+1. On the **Environment Variables** tab, specify **NVIDIA_VISIBLE_DEVICES** as **0**.
+
+ ![Environment variables.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-7.png)
+
+1. On the **Container Create Options** tab, provide the following options:
+
+ ```json
+ {
+     "Entrypoint": [
+         "/bin/sh"
+     ],
+     "Cmd": [
+         "-c",
+         "/tmp/nbody -benchmark -i=1000; while true; do echo no-op; sleep 10000;done"
+     ],
+     "HostConfig": {
+         "IpcMode": "host",
+         "PidMode": "host"
+     }
+ }
+ ```
+ The options are displayed as follows:
+
+ ![Container create options.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-8.png)
+
+ Select **Add**.
+
+1. The module that you added should show as **Running**.
+
+ ![Review and create deployment.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-9.png)
++
+1. Repeat the steps that you followed when adding the first module to add a second module. In this example, provide the name of the module as `cuda-sample2`.
+
+ ![Module settings for 2nd module.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-12.png)
+
+ Use the same environment variable, because both modules will share the same GPU.
+
+ ![Environment variable for 2nd module.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-13.png)
+
+ Use the same container create options that you provided for the first module and select **Add**.
+
+ ![Container create options for 2nd module.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-14.png)
+
+1. On the **Set modules** page, select **Review + Create** and then select **Create**.
+
+ ![Review and create 2nd deployment.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-15.png)
+
+1. The **Runtime status** of both the modules should now show as **Running**.
+
+ ![2nd deployment status.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/gpu-sharing-deploy-16.png)
++
+### Monitor workload deployment
+
+1. Open a new PowerShell session.
+
+1. List the pods running in the `iotedge` namespace. Run the following command:
+
+ `kubectl get pods -n iotedge`
+
+ If the config file you saved earlier isn't in the default `.kube` location, append `--kubeconfig <path to config file>`, as shown in the example output below.
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n iotedge --kubeconfig C:\GPU-sharing\kubeconfigs\configiotuser1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-869989578c-ssng8 2/2 Running 0 5s
+ cuda-sample2-6db6d98689-d74kb 2/2 Running 0 4s
+ edgeagent-79f988968b-7p2tv 2/2 Running 0 6d21h
+ edgehub-d6c764847-l8v4m 2/2 Running 0 24h
+ iotedged-55fdb7b5c6-l9zn8 1/1 Running 1 6d21h
+ PS C:\WINDOWS\system32>
+ ```
+ There are two pods, `cuda-sample1-869989578c-ssng8` and `cuda-sample2-6db6d98689-d74kb`, running on your device.
+
+1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+
+ Here is an example output when both the containers are running the n-body simulation:
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Fri Mar 5 13:31:16 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 52C P0 69W / 70W | 221MiB / 15109MiB | 100% Default |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | 0 N/A N/A 188342 C /tmp/nbody 109MiB |
+ | 0 N/A N/A 188413 C /tmp/nbody 109MiB |
+ +--+
+ [10.100.10.10]: PS>
+ ```
+ As you can see, there are two containers running with n-body simulation on GPU 0. You can also view their corresponding memory usage.
+
+1. Once the simulation has completed, the Nvidia smi output will show that there are no processes running on the device.
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Fri Mar 5 13:54:48 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 34C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | No running processes found |
+ +--+
+ [10.100.10.10]: PS>
+ ```
+
+1. After the n-body simulation has completed, view the logs to understand the details of the deployment and the time required for the simulation to complete.
+
+ Here is an example output from the first container:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl -n iotedge --kubeconfig C:\GPU-sharing\kubeconfigs\configiotuser1 logs cuda-sample1-869989578c-ssng8 cuda-sample1
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ==============// snipped //===================// snipped //=============
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 170171.531 ms
+ = 98.590 billion interactions per second
+ = 1971.801 single-precision GFLOP/s at 20 flops per interaction
+ no-op
+ PS C:\WINDOWS\system32>
+ ```
+ Here is an example output from the second container:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl -n iotedge --kubeconfig C:\GPU-sharing\kubeconfigs\configiotuser1 logs cuda-sample2-6db6d98689-d74kb cuda-sample2
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ==============// snipped //===================// snipped //=============
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 170054.969 ms
+ = 98.658 billion interactions per second
+ = 1973.152 single-precision GFLOP/s at 20 flops per interaction
+ no-op
+ PS C:\WINDOWS\system32>
+ ```
+
+1. Stop the module deployment. In the IoT Hub resource for your device:
+ 1. Go to **Automatic Device Management > IoT Edge**. Select the IoT Edge device corresponding to your device.
+
+ 1. Go to **Set modules** and select a module.
+
+ ![Select Set module.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/stop-module-deployment-1.png)
+
+ 1. On the **Modules** tab, select a module.
+
+ ![Select a module.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/stop-module-deployment-2.png)
+
+ 1. On the **Module settings** tab, set **Desired status** to stopped. Select **Update**.
+
+ ![Modify module settings.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/stop-module-deployment-3.png)
+
+ 1. Repeat the steps to stop the second module deployed on the device. Select **Review + create** and then select **Create**. This should update the deployment.
+
+ ![Review and create updated deployment.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/stop-module-deployment-6.png)
+
+ 1. Refresh the **Set modules** page multiple times until the module **Runtime status** shows as **Stopped**.
+
+ ![Verify deployment status.](media/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing/stop-module-deployment-8.png)
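+
+Optionally, once the portal shows both modules as **Stopped**, you can confirm from the Kubernetes side that the corresponding pods have terminated. This is a sketch that reuses the kubeconfig path shown in the earlier example output; yours will differ.
+
+```powershell
+# After the modules are stopped, the cuda-sample1 and cuda-sample2 pods should no longer be listed.
+kubectl get pods -n iotedge --kubeconfig "C:\GPU-sharing\kubeconfigs\configiotuser1"
+```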
+
+
+## Deploy with context-sharing
+
+You can now deploy the n-body simulation on two CUDA containers when MPS is running on your device. First, you'll enable MPS on the device.
++
+1. [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md).
+
+1. To enable MPS on your device, run the `Start-HcsGpuMPS` command.
+
+ ```powershell
+ [10.100.10.10]: PS>Start-HcsGpuMPS
+ K8S-1HXQG13CL-1HXQG13:
+ Set compute mode to EXCLUSIVE_PROCESS for GPU 0000191E:00:00.0.
+ All done.
+ Created nvidia-mps.service
+ [10.100.10.10]: PS>
+ ```
+1. Get the Nvidia smi output from the PowerShell interface of the device. You can see that the `nvidia-cuda-mps-server` process, which corresponds to the MPS service, is running on the device.
+
+ Here is an example output:
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Thu Mar 4 12:37:39 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 36C P8 9W / 70W | 28MiB / 15109MiB | 0% E. Process |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | 0 N/A N/A 122792 C nvidia-cuda-mps-server 25MiB |
+ +--+
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ ```
+
+1. Deploy the modules that you stopped earlier. Set the **Desired status** to running via **Set modules**.
+
+ Here is the example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n iotedge --kubeconfig C:\GPU-sharing\kubeconfigs\configiotuser1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-869989578c-2zxh6 2/2 Running 0 44s
+ cuda-sample2-6db6d98689-fn7mx 2/2 Running 0 44s
+ edgeagent-79f988968b-7p2tv 2/2 Running 0 5d20h
+ edgehub-d6c764847-l8v4m 2/2 Running 0 27m
+ iotedged-55fdb7b5c6-l9zn8 1/1 Running 1 5d20h
+ PS C:\WINDOWS\system32>
+ ```
+ You can see that the modules are deployed and running on your device.
+
+1. When the modules are deployed, the n-body simulation also starts running on both the containers. Here is the example output when the simulation has completed on the first container:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl -n iotedge logs cuda-sample1-869989578c-2zxh6 cuda-sample1
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ==============// snipped //===================// snipped //=============
+
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 155256.062 ms
+ = 108.062 billion interactions per second
+ = 2161.232 single-precision GFLOP/s at 20 flops per interaction
+ no-op
+ PS C:\WINDOWS\system32>
+ ```
+ Here is the example output when the simulation has completed on the second container:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl -n iotedge --kubeconfig C:\GPU-sharing\kubeconfigs\configiotuser1 logs cuda-sample2-6db6d98689-fn7mx cuda-sample2
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ==============// snipped //===================// snipped //=============
+
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 155366.359 ms
+ = 107.985 billion interactions per second
+ = 2159.697 single-precision GFLOP/s at 20 flops per interaction
+ no-op
+ PS C:\WINDOWS\system32>
+ ```
+
+1. Get the Nvidia smi output from the PowerShell interface of the device while both containers are running the n-body simulation. Here is an example output. There are three processes: the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service, and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Thu Mar 4 12:59:44 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 54C P0 69W / 70W | 242MiB / 15109MiB | 100% E. Process |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | 0 N/A N/A 56832 M+C /tmp/nbody 107MiB |
+ | 0 N/A N/A 56900 M+C /tmp/nbody 107MiB |
+ | 0 N/A N/A 122792 C nvidia-cuda-mps-server 25MiB |
+ +--+
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ ```
+
+## Next steps
+
+- [Deploy a shared GPU Kubernetes workload on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md).
databox-online Azure Stack Edge Gpu Deploy Kubernetes Gpu Sharing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md
+
+ Title: Deploy Kubernetes workload using GPU sharing on Azure Stack Edge Pro GPU device
+description: Describes how you can deploy a GPU shared workload via Kubernetes on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/12/2021++
+# Deploy a Kubernetes workload using GPU sharing on your Azure Stack Edge Pro
+
+This article describes how containerized workloads can share the GPUs on your Azure Stack Edge Pro GPU device. In this article, you will run two jobs, one without GPU context-sharing and one with context-sharing enabled via the Multi-Process Service (MPS) on the device. For more information, see the [Multi-Process Service](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf).
+
+## Prerequisites
+
+Before you begin, make sure that:
+
+1. You have access to an Azure Stack Edge Pro GPU device that is [activated](azure-stack-edge-gpu-deploy-activate.md) and has [compute configured](azure-stack-edge-gpu-deploy-configure-compute.md). You have the [Kubernetes API endpoint](azure-stack-edge-gpu-deploy-configure-compute.md#get-kubernetes-endpoints), and you have added this endpoint to the `hosts` file on the client that will be accessing the device.
+
+1. You have access to a client system with a [supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). If you're using a Windows client, the system should run PowerShell 5.0 or later to access the device.
+
+1. You have created a namespace and a user. You have also granted the user access to this namespace. You have the kubeconfig file of this namespace installed on the client system that you'll use to access your device. For detailed instructions, see [Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-create-kubernetes-cluster.md#configure-cluster-access-via-kubernetes-rbac).
+
+1. Save the following deployment `yaml` on your local system. You'll use this file to run the Kubernetes deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia. An optional validation sketch follows the manifest.
+
+ ```yml
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+   name: cuda-sample1
+ spec:
+   template:
+     spec:
+       hostPID: true
+       hostIPC: true
+       containers:
+         - name: cuda-sample-container1
+           image: nvidia/samples:nbody
+           command: ["/tmp/nbody"]
+           args: ["-benchmark", "-i=1000"]
+           env:
+             - name: NVIDIA_VISIBLE_DEVICES
+               value: "0"
+       restartPolicy: "Never"
+   backoffLimit: 1
+ ---
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+   name: cuda-sample2
+ spec:
+   template:
+     spec:
+       hostPID: true
+       hostIPC: true
+       containers:
+         - name: cuda-sample-container2
+           image: nvidia/samples:nbody
+           command: ["/tmp/nbody"]
+           args: ["-benchmark", "-i=1000"]
+           env:
+             - name: NVIDIA_VISIBLE_DEVICES
+               value: "0"
+       restartPolicy: "Never"
+   backoffLimit: 1
+ ```
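+
+Optionally, before deploying the manifest to the device, you can validate it from your client. This is a sketch and assumes the file is saved as `k8-gpusharing.yaml` (the name used later in this article); on older kubectl versions, the equivalent flag is `--dry-run` instead of `--dry-run=client`.
+
+```powershell
+# Client-side dry run: parses the manifest and reports errors without creating the jobs.
+kubectl apply -f C:\gpu-sharing\k8-gpusharing.yaml -n <Name of the namespace> --dry-run=client
+```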
+
+## Verify GPU driver, CUDA version
+
+The first step is to verify that your device is running the required GPU driver and CUDA versions.
+
+1. [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+1. Run the following command:
+
+ ```powershell
+ Get-HcsGpuNvidiaSmi
+ ```
+
+1. In the Nvidia smi output, make a note of the GPU driver version and the CUDA version on your device. If you're running Azure Stack Edge 2102 software, these values correspond to the following versions:
+
+ - GPU driver version: 460.32.03
+ - CUDA version: 11.2
+
+ Here is an example output:
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Wed Mar 3 12:24:27 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 34C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | No running processes found |
+ +--+
+ [10.100.10.10]: PS>
+ ```
+
+1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+++
+## Job without context-sharing
+
+You'll run the first job to deploy an application on your device in the namespace `mynamesp1`. This application deployment will also show that the GPU context-sharing is not enabled by default.
+
+1. List all the pods running in the namespace. Run the following command:
+
+ ```powershell
+ kubectl get pods -n <Name of the namespace>
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ No resources found.
+ ```
+1. Start a deployment job on your device using the deployment.yaml provided earlier. Run the following command:
+
+ ```powershell
+ kubectl apply -f <Path to the deployment .yaml> -n <Name of the namespace>
+ ```
+
+ This job creates two containers and runs an n-body simulation on both the containers. The number of simulation iterations is specified in the `.yaml`. For more information, see [N-body simulation](https://physics.princeton.edu//~fpretori/Nbody/intro.htm).
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl apply -f C:\gpu-sharing\k8-gpusharing.yaml -n mynamesp1
+ job.batch/cuda-sample1 created
+ job.batch/cuda-sample2 created
+ PS C:\WINDOWS\system32>
+ ```
+
+1. To list the pods started in the deployment, run the following command:
+
+ ```powershell
+ kubectl get pods -n <Name of the namespace>
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-27srm 1/1 Running 0 28s
+ cuda-sample2-db9vx 1/1 Running 0 27s
+ PS C:\WINDOWS\system32>
+ ```
+
+ There are two pods, `cuda-sample1-27srm` and `cuda-sample2-db9vx`, running on your device.
+
+1. Fetch the details of the pods. Run the following command:
+
+ ```powershell
+ kubectl -n <Name of the namespace> describe job.batch/<Name of the job>
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl -n mynamesp1 describe job.batch/cuda-sample1; kubectl -n mynamesp1 describe job.batch/cuda-sample2
+ Name: cuda-sample1
+ Namespace: mynamesp1
+ Selector: controller-uid=22783f76-6af1-490d-b6eb-67dd4cda0e1f
+ Labels: controller-uid=22783f76-6af1-490d-b6eb-67dd4cda0e1f
+ job-name=cuda-sample1
+ Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"cuda-sample1","namespace":"mynamesp1"},"spec":{"backoffLimit":1...
+ Parallelism: 1
+ Completions: 1
+ Start Time: Wed, 03 Mar 2021 12:25:34 -0800
+ Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
+ Pod Template:
+ Labels: controller-uid=22783f76-6af1-490d-b6eb-67dd4cda0e1f
+ job-name=cuda-sample1
+ Containers:
+ cuda-sample-container1:
+ Image: nvidia/samples:nbody
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /tmp/nbody
+ Args:
+ -benchmark
+ -i=10000
+ Environment:
+ NVIDIA_VISIBLE_DEVICES: 0
+ Mounts: <none>
+ Volumes: <none>
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal SuccessfulCreate 60s job-controller Created pod: cuda-sample1-27srm
+ Name: cuda-sample2
+ Namespace: mynamesp1
+ Selector: controller-uid=e68c8d5a-718e-4880-b53f-26458dc24381
+ Labels: controller-uid=e68c8d5a-718e-4880-b53f-26458dc24381
+ job-name=cuda-sample2
+ Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"cuda-sample2","namespace":"mynamesp1"},"spec":{"backoffLimit":1...
+ Parallelism: 1
+ Completions: 1
+ Start Time: Wed, 03 Mar 2021 12:25:35 -0800
+ Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
+ Pod Template:
+ Labels: controller-uid=e68c8d5a-718e-4880-b53f-26458dc24381
+ job-name=cuda-sample2
+ Containers:
+ cuda-sample-container2:
+ Image: nvidia/samples:nbody
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /tmp/nbody
+ Args:
+ -benchmark
+ -i=10000
+ Environment:
+ NVIDIA_VISIBLE_DEVICES: 0
+ Mounts: <none>
+ Volumes: <none>
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal SuccessfulCreate 60s job-controller Created pod: cuda-sample2-db9vx
+ PS C:\WINDOWS\system32>
+ ```
+ The output indicates that both the pods were successfully created by the job.
+
+1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+
+ Here is an example output when both the containers are running the n-body simulation:
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Wed Mar 3 12:26:41 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 64C P0 69W / 70W | 221MiB / 15109MiB | 100% Default |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | 0 N/A N/A 197976 C /tmp/nbody 109MiB |
+ | 0 N/A N/A 198051 C /tmp/nbody 109MiB |
+ +--+
+ [10.100.10.10]: PS>
+ ```
+ As you can see, there are two containers (Type = C) running with n-body simulation on GPU 0.
+
+1. Monitor the n-body simulation. Run the `get pod` commands. Here is an example output when the simulation is running.
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-27srm 1/1 Running 0 70s
+ cuda-sample2-db9vx 1/1 Running 0 69s
+ PS C:\WINDOWS\system32>
+ ```
+
+ When the simulation is complete, the output will indicate that. Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-27srm 0/1 Completed 0 2m54s
+ cuda-sample2-db9vx 0/1 Completed 0 2m53s
+ PS C:\WINDOWS\system32>
+ ```
+
+1. After the simulation is complete, you can view the logs and the total time for the completion of the simulation. Run the following command:
+
+ ```powershell
+ kubectl logs -n <Name of the namespace> <pod name>
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl logs -n mynamesp1 cuda-sample1-27srm
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ===========// CUT //===================// CUT //=====================
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 170398.766 ms
+ = 98.459 billion interactions per second
+ = 1969.171 single-precision GFLOP/s at 20 flops per interaction
+ PS C:\WINDOWS\system32>
+ ```
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl logs -n mynamesp1 cuda-sample2-db9vx
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ===========// CUT //===================// CUT //=====================
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 170368.859 ms
+ = 98.476 billion interactions per second
+ = 1969.517 single-precision GFLOP/s at 20 flops per interaction
+ PS C:\WINDOWS\system32>
+ ```
+1. There should be no processes running on the GPU at this time. You can verify this by viewing the GPU utilization using the Nvidia smi output.
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Wed Mar 3 12:32:52 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 00002C74:00:00.0 Off | 0 |
+ | N/A 38C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | No running processes found |
+ +--+
+ [10.100.10.10]: PS>
+ ```
+
+## Job with context-sharing
+
+You'll run the second job to deploy the n-body simulation on two CUDA containers when GPU context-sharing is enabled through MPS. First, you'll enable MPS on the device.
+
+1. [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md).
+
+1. To enable MPS on your device, run the `Start-HcsGpuMPS` command.
+
+ ```powershell
+ [10.100.10.10]: PS>Start-HcsGpuMPS
+ K8S-1HXQG13CL-1HXQG13:
+
+ Set compute mode to EXCLUSIVE_PROCESS for GPU 00002C74:00:00.0.
+ All done.
+ Created nvidia-mps.service
+ [10.100.10.10]: PS>
+ ```
+1. Run the job using the same deployment `yaml` you used earlier. You may need to delete the existing deployment. See [Delete deployment](#delete-deployment).
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl -n mynamesp1 delete -f C:\gpu-sharing\k8-gpusharing.yaml
+ job.batch "cuda-sample1" deleted
+ job.batch "cuda-sample2" deleted
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ No resources found.
+ PS C:\WINDOWS\system32> kubectl -n mynamesp1 apply -f C:\gpu-sharing\k8-gpusharing.yaml
+ job.batch/cuda-sample1 created
+ job.batch/cuda-sample2 created
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-vcznt 1/1 Running 0 21s
+ cuda-sample2-zkx4w 1/1 Running 0 21s
+ PS C:\WINDOWS\system32> kubectl -n mynamesp1 describe job.batch/cuda-sample1; kubectl -n mynamesp1 describe job.batch/cuda-sample2
+ Name: cuda-sample1
+ Namespace: mynamesp1
+ Selector: controller-uid=ed06bdf0-a282-4b35-a2a0-c0d36303a35e
+ Labels: controller-uid=ed06bdf0-a282-4b35-a2a0-c0d36303a35e
+ job-name=cuda-sample1
+ Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"cuda-sample1","namespace":"mynamesp1"},"spec":{"backoffLimit":1...
+ Parallelism: 1
+ Completions: 1
+ Start Time: Wed, 03 Mar 2021 21:51:51 -0800
+ Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
+ Pod Template:
+ Labels: controller-uid=ed06bdf0-a282-4b35-a2a0-c0d36303a35e
+ job-name=cuda-sample1
+ Containers:
+ cuda-sample-container1:
+ Image: nvidia/samples:nbody
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /tmp/nbody
+ Args:
+ -benchmark
+ -i=10000
+ Environment:
+ NVIDIA_VISIBLE_DEVICES: 0
+ Mounts: <none>
+ Volumes: <none>
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal SuccessfulCreate 46s job-controller Created pod: cuda-sample1-vcznt
+ Name: cuda-sample2
+ Namespace: mynamesp1
+ Selector: controller-uid=6282b8fa-e76d-4f45-aa85-653ee0212b29
+ Labels: controller-uid=6282b8fa-e76d-4f45-aa85-653ee0212b29
+ job-name=cuda-sample2
+ Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"cuda-sample2","namespace":"mynamesp1"},"spec":{"backoffLimit":1...
+ Parallelism: 1
+ Completions: 1
+ Start Time: Wed, 03 Mar 2021 21:51:51 -0800
+ Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
+ Pod Template:
+ Labels: controller-uid=6282b8fa-e76d-4f45-aa85-653ee0212b29
+ job-name=cuda-sample2
+ Containers:
+ cuda-sample-container2:
+ Image: nvidia/samples:nbody
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /tmp/nbody
+ Args:
+ -benchmark
+ -i=10000
+ Environment:
+ NVIDIA_VISIBLE_DEVICES: 0
+ Mounts: <none>
+ Volumes: <none>
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal SuccessfulCreate 47s job-controller Created pod: cuda-sample2-zkx4w
+ PS C:\WINDOWS\system32>
+ ```
+
+1. While the simulation is running, you can view the Nvidia smi output. The output shows the processes that correspond to the CUDA containers (type M + C) running the n-body simulation, and the MPS service process (type C). All these processes share GPU 0.
+
+ ```powershell
+ PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Mon Mar 3 21:54:50 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 0000E00B:00:00.0 Off | 0 |
+ | N/A 45C P0 68W / 70W | 242MiB / 15109MiB | 100% E. Process |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | 0 N/A N/A 144377 M+C /tmp/nbody 107MiB |
+ | 0 N/A N/A 144379 M+C /tmp/nbody 107MiB |
+ | 0 N/A N/A 144443 C nvidia-cuda-mps-server 25MiB |
+ +--+
+ ```
+
+1. After the simulation is complete, you can view the logs and the total time for the completion of the simulation. Run the following command:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n mynamesp1
+ NAME READY STATUS RESTARTS AGE
+ cuda-sample1-vcznt 0/1 Completed 0 5m44s
+ cuda-sample2-zkx4w 0/1 Completed 0 5m44s
+ PS C:\WINDOWS\system32> kubectl logs -n mynamesp1 cuda-sample1-vcznt
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ===========// CUT //===================// CUT //=====================
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 154979.453 ms
+ = 108.254 billion interactions per second
+ = 2165.089 single-precision GFLOP/s at 20 flops per interaction
++
+ PS C:\WINDOWS\system32> kubectl logs -n mynamesp1 cuda-sample2-zkx4w
+ Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
+ ===========// CUT //===================// CUT //=====================
+ > Windowed mode
+ > Simulation data stored in video memory
+ > Single precision floating point simulation
+ > 1 Devices used for simulation
+ GPU Device 0: "Turing" with compute capability 7.5
+
+ > Compute 7.5 CUDA device: [Tesla T4]
+ 40960 bodies, total time for 10000 iterations: 154986.734 ms
+ = 108.249 billion interactions per second
+ = 2164.987 single-precision GFLOP/s at 20 flops per interaction
+ PS C:\WINDOWS\system32>
+ ```
+1. After the simulation is complete, you can view the Nvidia smi output again. Only the nvidia-cuda-mps-server process for the MPS service shows as running.
+
+ ```powershell
+ PS>Get-HcsGpuNvidiaSmi
+ K8S-1HXQG13CL-1HXQG13:
+
+ Mon Mar 3 21:59:55 2021
+ +--+
+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
+ |-+-+-+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 On | 0000E00B:00:00.0 Off | 0 |
+ | N/A 37C P8 9W / 70W | 28MiB / 15109MiB | 0% E. Process |
+ | | | N/A |
+ +-+-+-+
+
+ +--+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | 0 N/A N/A 144443 C nvidia-cuda-mps-server 25MiB |
+ +--+
+ ```
+
+## Delete deployment
+
+You may need to delete deployments when switching between running with MPS enabled and with MPS disabled on your device.
+
+To delete the deployment on your device, run the following command:
+
+```powershell
+kubectl delete -f <Path to the deployment .yaml> -n <Name of the namespace>
+```
+
+Here is an example output:
+
+```powershell
+PS C:\WINDOWS\system32> kubectl delete -f 'C:\gpu-sharing\k8-gpusharing.yaml' -n mynamesp1
+job.batch "cuda-sample1" deleted
+job.batch "cuda-sample2" deleted
+PS C:\WINDOWS\system32>
+```
+
+## Next steps
+
+- [Deploy an IoT Edge workload with GPU sharing on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md).
databox-online Azure Stack Edge Gpu Sharing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-sharing.md
+
+ Title: GPU sharing on Azure Stack Edge Pro GPU device
+description: Describes the approaches to sharing GPUs on Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/05/2021+++
+# GPU sharing on your Azure Stack Edge Pro GPU device
+
+A graphics processing unit (GPU) is a specialized processor designed to accelerate graphics rendering. GPUs can process many pieces of data simultaneously, making them useful for machine learning, video editing, and gaming applications. In addition to CPUs for general-purpose compute, your Azure Stack Edge Pro GPU device can contain one or two Nvidia Tesla T4 GPUs for compute-intensive workloads such as hardware-accelerated inferencing. For more information, see [Nvidia's Tesla T4 GPU](https://www.nvidia.com/data-center/tesla-t4/).
++
+## About GPU sharing
+
+Many machine learning and other compute workloads may not need a dedicated GPU. GPUs can be shared, and sharing them among containerized or VM workloads increases GPU utilization without significantly affecting the performance benefits of the GPU.
+
+## Using GPU with VMs
+
+On your Azure Stack Edge Pro device, a GPU can't be shared when deploying VM workloads. A GPU can only be mapped to one VM. This means you can have only one GPU VM on a device with one GPU, and two GPU VMs on a device equipped with two GPUs. There are other factors that must also be considered when using GPU VMs on a device that has Kubernetes configured for containerized workloads. For more information, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#gpu-vms-and-kubernetes).
++
+## Using GPU with containers
+
+If you're deploying containerized workloads, a GPU can be shared in more than one way at the hardware and software layers. With the Tesla T4 GPU on your Azure Stack Edge Pro device, only software sharing is available. On your device, the following two approaches for software sharing of the GPU are used:
+
+- The first approach involves using environment variables to specify the number of GPUs that can be time-shared. Consider the following caveats when using this approach:
+
+ - You can specify one GPU, both GPUs, or no GPU with this method. It isn't possible to specify fractional usage.
+ - Multiple modules can map to one GPU, but the same module can't be mapped to more than one GPU.
+ - With the Nvidia SMI output, you can see the overall GPU utilization, including memory utilization.
+
+ For more information, see how to [Deploy an IoT Edge module that uses GPU](azure-stack-edge-gpu-configure-gpu-modules.md) on your device.
+
+- The second approach requires you to enable the Multi-Process Service on your Nvidia GPUs. MPS is a runtime service that lets multiple CUDA processes run concurrently on a single shared GPU. MPS allows kernel and memcopy operations from different processes to overlap on the GPU to achieve maximum utilization. For more information, see [Multi-Process Service](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf).
+
+ Consider the following caveats when using this approach:
+
+ - MPS allows you to specify more flags in GPU deployment.
+ - You can specify fractional usage via MPS thereby limiting the usage of each application deployed on the device. You can specify the GPU percentage to use for each app under the `env` section of the `deployment.yaml` by adding the following parameter:
+
+ ```yml
+ # Example: the application wants to limit GPU usage to 20%
+
+ env:
+   - name: CUDA_MPS_ACTIVE_THREAD_PERCENTAGE
+     value: "20"
+ ```
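+
+Putting the two approaches together, a container spec that both pins the workload to a specific GPU and caps its share of that GPU might carry both environment variables. This is a minimal sketch, not taken from a specific deployment in this article; the values are illustrative.
+
+```yml
+env:
+  - name: NVIDIA_VISIBLE_DEVICES    # First approach: expose only GPU 0 to this container
+    value: "0"
+  - name: CUDA_MPS_ACTIVE_THREAD_PERCENTAGE    # Second approach: with MPS enabled, cap this app at 20% of the GPU
+    value: "20"
+```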
+
+## GPU utilization
+
+When you share the GPU across containerized workloads deployed on your device, you can use the Nvidia System Management Interface (nvidia-smi) to monitor utilization. Nvidia-smi is a command-line utility that helps you manage and monitor Nvidia GPU devices. For more information, see [Nvidia System Management Interface](https://developer.nvidia.com/nvidia-system-management-interface).
+
+To view GPU usage, first connect to the PowerShell interface of the device. Run the `Get-HcsGpuNvidiaSmi` command and view the Nvidia SMI output. You can also view how the GPU utilization changes by enabling MPS and then deploying multiple workloads on the device. For more information, see [Enable Multi-Process Service](azure-stack-edge-gpu-connect-powershell-interface.md#enable-multi-process-service-mps).
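+
+For example, from the PowerShell interface of the device (a sketch; the exact output depends on the workloads you've deployed):
+
+```powershell
+# View current GPU, memory, and process utilization on the device.
+Get-HcsGpuNvidiaSmi
+```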
++
+## Next steps
+
+- [GPU sharing for Kubernetes deployments on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md).
+- [GPU sharing for IoT deployments on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md).
databox-online Azure Stack Edge Migrate Fpga Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
Previously updated : 02/10/2021 Last updated : 03/11/2021 # Migrate workloads from an Azure Stack Edge Pro FPGA to an Azure Stack Edge Pro GPU
-This article describes how to migrate workloads and data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. The migration procedure involves an overview of migration including a comparison between the two devices, migration considerations, detailed steps, and verification followed by cleanup.
+This article describes how to migrate workloads and data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. The migration process begins with a comparison of the two devices, a migration plan, and a review of migration considerations. The migration procedure gives detailed steps ending with verification and device cleanup.
-<!--Azure Stack Edge Pro FPGA devices will reach end-of-life in February 2024. If you are considering new deployments, we recommend that you explore Azure Stack Edge Pro GPU devices for your workloads.-->
## About migration Migration is the process of moving workloads and application data from one storage location to another. This entails making an exact copy of an organizationΓÇÖs current data from one storage device to another storage device - preferably without disrupting or disabling active applications - and then redirecting all input/output (I/O) activity to the new device.
-This migration guide provides a step-by-step walkthrough of the steps required to migrate data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. This document is intended for information technology (IT) professionals and knowledge workers who are responsible for operating, deploying, and managing Azure Stack Edge devices in the datacenter.
+This migration guide provides a step-by-step walkthrough of the steps required to migrate data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. This document is intended for information technology (IT) professionals and knowledge workers who are responsible for operating, deploying, and managing Azure Stack Edge devices in the datacenter.
In this article, the Azure Stack Edge Pro FPGA device is referred to as the *source* device and the Azure Stack Edge Pro GPU device is the *target* device. ## Comparison summary
-This section provides a comparative summary of capabilities between the Azure Stack Edge Pro GPU vs. the Azure Stack Edge Pro FPGA devices. The hardware in both the source and the target device is largely identical and differs only with respect to the hardware acceleration card and the storage capacity.
+This section provides a comparative summary of capabilities between the Azure Stack Edge Pro GPU vs. the Azure Stack Edge Pro FPGA devices. The hardware in both the source and the target device is largely identical; only the hardware acceleration card and the storage capacity may differ.<!--Please verify: These components MAY, but need not necessarily, differ?-->
| Capability | Azure Stack Edge Pro GPU (Target device) | Azure Stack Edge Pro FPGA (Source device)| |-|--||
-| Hardware | Hardware acceleration: 1 or 2 Nvidia T4 GPUs <br> Compute, memory, network interface, power supply unit, power cord specifications are identical to the device with FPGA. | Hardware acceleration: Intel Arria 10 FPGA <br> Compute, memory, network interface, power supply unit, power cord specifications are identical to the device with GPU. |
+| Hardware | Hardware acceleration: 1 or 2 Nvidia T4 GPUs <br> Compute, memory, network interface, power supply unit, and power cord specifications are identical to the device with FPGA. | Hardware acceleration: Intel Arria 10 FPGA <br> Compute, memory, network interface, power supply unit, and power cord specifications are identical to the device with GPU. |
| Usable storage | 4.19 TB <br> After reserving space for parity resiliency and internal use | 12.5 TB <br> After reserving space for internal use | | Security | Certificates | | | Workloads | IoT Edge workloads <br> VM workloads <br> Kubernetes workloads| IoT Edge workloads |
To create your migration plan, consider the following information:
Before you proceed with the migration, consider the following information: -- An Azure Stack Edge Pro GPU device can't be activated against an Azure Stack Edge Pro FPGA resource. A new resource should be created for the Azure Stack Edge Pro GPU device as described in the [Create an Azure Stack Edge Pro GPU order](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
+- An Azure Stack Edge Pro GPU device can't be activated against an Azure Stack Edge Pro FPGA resource. You should create a new resource for the Azure Stack Edge Pro GPU device as described in [Create an Azure Stack Edge Pro GPU order](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
- The Machine Learning models deployed on the source device that used the FPGA will need to be changed for the target device with GPU. For help with the models, you can contact Microsoft Support. The custom models deployed on the source device that did not use the FPGA (used CPU only) should work as-is on the target device (using CPU).-- The IoT Edge modules deployed on the source device may require changes before these can be successfully deployed on the target device.
+- The IoT Edge modules deployed on the source device may require changes before the modules can be successfully deployed on the target device.
- The source device supports NFS 3.0 and 4.1 protocols. The target device only supports NFS 3.0 protocol.-- The source device support SMB and NFS protocols. The target device supports storage via REST protocol using storage accounts in addition to SMB and NFS protocols for shares.
+- The source device support SMB and NFS protocols. The target device supports storage via the REST protocol using storage accounts in addition to the SMB and NFS protocols for shares.
- The share access on the source device is via the IP address whereas the share access on the target device is via the device name. ## Migration steps at-a-glance
Edge cloud shares tier data from your device to Azure. Do these steps on your *s
- Make a list of all the Edge cloud shares and users that you have on the source device. - Make a list of all the bandwidth schedules that you have. You will recreate these bandwidth schedules on your target device.-- Depending on the network bandwidth available, configure bandwidth schedules on your device so as to maximize the data tiered to the cloud. This would minimize the local data on the device.-- Ensure that the shares are fully tiered to the cloud. This can be confirmed by checking the share status in the Azure portal.
+- Depending on the network bandwidth available, configure bandwidth schedules on your device to maximize the data tiered to the cloud. That minimizes the local data on the device.
+- Ensure that the shares are fully tiered to the cloud. The tiering can be confirmed by checking the share status in the Azure portal.
#### Data in Edge local shares Data in Edge local shares stays on the device. Do these steps on your *source* device via the Azure portal. -- Make a list of the Edge local shares that you have on the device.-- Given this is one-time migration of the data, create a copy of the Edge local share data to another on-premises server. You can use copy tools such as `robocopy` (SMB) or `rsync` (NFS) to copy the data. Optionally you may have already deployed a third-party data protection solution to back up the data in your local shares. The following third-party solutions are supported for use with Azure Stack Edge Pro FPGA devices:
+- Make a list of the Edge local shares on the device.
+- Since you'll be doing a one-time migration of the data, create a copy of the Edge local share data to another on-premises server. You can use copy tools such as `robocopy` (SMB) or `rsync` (NFS) to copy the data. Optionally you may have already deployed a third-party data protection solution to back up the data in your local shares. The following third-party solutions are supported for use with Azure Stack Edge Pro FPGA devices:
| Third-party software | Reference to the solution | |--||
You will now copy data from the source device to the Edge cloud shares and Edge
Follow these steps to sync the data on the Edge cloud shares on your target device:
-1. [Add shares](azure-stack-edge-gpu-manage-shares.md#add-a-share) corresponding to the share names created on the source device. Make sure that while creating shares, **Select blob container** is set to **Use existing** option and then select the container that was used with the previous device.
-1. [Add users](azure-stack-edge-gpu-manage-users.md#add-a-user) that had access to the previous device.
-1. [Refresh the share](azure-stack-edge-gpu-manage-shares.md#refresh-shares) data from Azure. This pulls down all the cloud data from the existing container to the shares.
-1. Recreate the bandwidth schedules to be associated with your shares. See [Add a bandwidth schedule](azure-stack-edge-gpu-manage-bandwidth-schedules.md#add-a-schedule) for detailed steps.
+1. [Add shares](azure-stack-edge-j-series-manage-shares.md#add-a-share) corresponding to the share names created on the source device. When you create the shares, make sure that **Select blob container** is set to **Use existing**, and then select the container that was used with the previous device.
+1. [Add users](azure-stack-edge-j-series-manage-users.md#add-a-user) that had access to the previous device.
+1. [Refresh the share](azure-stack-edge-j-series-manage-shares.md#refresh-shares) data from Azure. Refreshing the share will pull down all the cloud data from the existing container to the shares.
+1. Recreate the bandwidth schedules to be associated with your shares. See [Add a bandwidth schedule](azure-stack-edge-j-series-manage-bandwidth-schedules.md#add-a-schedule) for detailed steps.
### 2. From Edge local shares
Follow these steps to recover the data from local shares:
1. Add all the local shares on the target device. See the detailed steps in [Add a local share](azure-stack-edge-gpu-manage-shares.md#add-a-local-share). 1. Accessing the SMB shares on the source device will use the IP addresses whereas on the target device, you'll use device name. See [Connect to an SMB share on Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-shares.md#connect-to-an-smb-share). To connect to NFS shares on the target device, you'll need to use the new IP addresses associated with the device. See [Connect to an NFS share on Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-shares.md#connect-to-an-nfs-share).
- If you copied over your share data to an intermediate server over SMB/NFS, you can copy this data over to shares on the target device. You can also copy the data over directly from the source device if both the source and the target device are *online*.
+ If you copied your share data to an intermediate server over SMB or NFS, you can copy the data from the intermediate server to shares on the target device. If both the source and the target device are *online*, you can also copy the data directly from the source device.
- If you had used a third-party software to back up the data in the local shares, you will need to run the recovery procedure provided by the data protection solution of choice. See references in the following table.
+ If you have used third-party software to back up the data in the local shares, you will need to run the recovery procedure that's provided by the data protection solution of choice. See references in the following table.
| Third-party software | Reference to the solution | |--||
databox-online Azure Stack Edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-overview.md
Previously updated : 09/09/2020 Last updated : 03/15/2021 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro is and how it works so I can use it to process and transform data before sending to Azure. # What is Azure Stack Edge Pro with FPGA?
-Azure Stack Edge Pro with FPGA is an AI-enabled edge computing device with network data transfer capabilities. This article provides you an overview of the Azure Stack Edge Pro with FPGA solution, benefits, key capabilities, and the scenarios where you can deploy this device.
+Azure Stack Edge Pro with FPGA is an AI-enabled edge computing device with network data transfer capabilities. This article provides you an overview of the Azure Stack Edge Pro with FPGA solution, benefits, key capabilities, and deployment scenarios.
-Azure Stack Edge Pro with FPGA is a Hardware-as-a-service solution. Microsoft ships you a cloud-managed device with a built-in Field Programmable Gate Array (FPGA) that enables accelerated AI-inferencing and has all the capabilities of a network storage gateway.
+Azure Stack Edge Pro with FPGA is a Hardware-as-a-service solution. Microsoft ships you a cloud-managed device with a built-in Field Programmable Gate Array (FPGA) that enables accelerated AI-inferencing and has all the capabilities of a network storage gateway.
+
+Azure Data Box Edge is rebranded as Azure Stack Edge.
## Use cases
Azure Stack Edge Pro has the following capabilities:
||| |Accelerated AI inferencing| Enabled by the built-in FPGA.| |Computing |Allows analysis, processing, filtering of data.|
-|High performance | High performance compute and data transfers.|
+|High performance | High-performance compute and data transfers.|
|Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.| |Cloud-managed |Device and service are managed via the Azure portal. | |Offline upload | Disconnected mode supports offline upload scenarios.|
Azure Stack Edge Pro has the following capabilities:
The Azure Stack Edge Pro solution comprises of Azure Stack Edge resource, Azure Stack Edge Pro physical device, and a local web UI.
-* **Azure Stack Edge Pro physical device** - A 1U rack-mounted server supplied by Microsoft that can be configured to send data to Azure.
+* **Azure Stack Edge Pro physical device**: A 1U rack-mounted server supplied by Microsoft that can be configured to send data to Azure.
-* **Azure Stack Edge resource** ΓÇô a resource in the Azure portal that lets you manage an Azure Stack Edge Pro device from a web interface that you can access from different geographical locations. Use the Azure Stack Edge resource to create and manage resources, view, and manage devices and alerts, and manage shares.
+* **Azure Stack Edge resource**: A resource in the Azure portal that lets you manage an Azure Stack Edge Pro device from a web interface that you can access from different geographic locations. Use the Azure Stack Edge resource to create and manage resources, manage shares, and view and manage devices and alerts.
+
+ <!--[The Azure Stack Edge service in Azure portal](media/data-box-overview/data-box-Edge-service1.png)-->
- <!--![The Azure Stack Edge service in Azure portal](media/data-box-overview/data-box-Edge-service1.png)-->
+ As Azure Stack Edge Pro approaches its end of life, no orders for new Azure Stack Edge Pro devices are being filled. If you're a new customer, we recommend that you explore using Azure Stack Edge Pro - GPU devices for your workloads. For more information, go to [What is Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-overview.md). For information about ordering an Azure Stack Edge Pro with GPU device, go to [Create a new resource for Azure Stack Edge Pro - GPU](azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal#create-a-new-resource).
- For more information, go to [Create an order for your Azure Stack Edge Pro device](azure-stack-edge-deploy-prep.md#create-a-new-resource).
+ If you're an existing customer, you can still create a new Azure Stack Edge Pro resource if you need to replace or reset your existing Azure Stack Edge Pro device. For instructions, go to [Create an order for your Azure Stack Edge Pro device](azure-stack-edge-deploy-prep.md#create-new-resource-for-existing-device).
* **Azure Stack Edge Pro local web UI** - Use the local web UI to run diagnostics, shut down and restart the Azure Stack Edge Pro device, view copy logs, and contact Microsoft Support to file a service request.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
To install:
| **appliance hostname:** | - | | **DNS:** | - | | **default gateway IP address:** | - |
- | **input interfaces:** | The system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator.You do not have to configure the bridge interface. This option is used for special use cases only. |
+ | **input interfaces:** | The system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator. You do not have to configure the bridge interface. This option is used for special use cases only. |
1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
To install:
1. Sign-in credentials are automatically generated and presented. Copy the username and password in a safe place, because they're required for sign-in and administration.
- - **Support**: The administrative user for user management.
+ - **Support**: The administrative user for user management.
- - **CyberX**: The equivalent of root for accessing the appliance.
+ - **CyberX**: The equivalent of root for accessing the appliance.
1. The appliance restarts.
To install the software:
1. Sign-in credentials are automatically generated and presented. Keep these credentials in a safe place, because they're required for sign-in and administration.
- - **Support**: The administrative user for user management.
-
- - **CyberX**: The equivalent of root for accessing the appliance.
+ | Username | Description |
+ |--|--|
+ | Support | The administrative user for user management. |
+ | CyberX | The equivalent of root for accessing the appliance. |
1. The appliance restarts.
digital-twins Quickstart Adt Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-adt-explorer.md
Open a console window to the folder location **Azure_Digital_Twins__ADT__explore
> [!TIP] > If a `SignalRService.subscribe` error message appears when you connect, make sure that your Azure Digital Twins URL begins with *https://*.-
-> [!TIP]
-> If an authentication error appears, you may want to check your environment variables to make sure any credentials included there are valid for Azure Digital Twins. The DefaultAzureCredential attempts to authenticate against [Credential Types](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) in a specific order, and environment variables are evaluated first.
+>
+> If an authentication error appears, you may want to check your **environment variables** to make sure any credentials included there are valid for Azure Digital Twins. The `DefaultAzureCredential` attempts to authenticate against credential types in a [specific order](/dotnet/api/overview/azure/identity-readme#defaultazurecredential), and environment variables are evaluated first.
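If you want to confirm what `DefaultAzureCredential` finds in your environment, the following minimal Python sketch (not part of Azure Digital Twins Explorer itself, shown only as an illustration) checks the environment variables that the environment credential reads and then requests a token. The variable names are the standard Azure Identity ones; the token scope shown is assumed to be the Azure Digital Twins data-plane scope.

```python
import os
from azure.identity import DefaultAzureCredential

# Environment variables evaluated first by DefaultAzureCredential (via EnvironmentCredential).
for name in ("AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET"):
    print(f"{name}: {'set' if os.environ.get(name) else 'not set'}")

credential = DefaultAzureCredential()
# Assumed Azure Digital Twins data-plane scope; a failure here usually points to invalid credentials.
token = credential.get_token("https://digitaltwins.azure.net/.default")
print("Token acquired, expires on:", token.expires_on)
```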
If you see a **Permissions requested** pop-up window from Microsoft, grant consent for this application and accept to continue.
dms Tutorial Oracle Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-oracle-azure-postgresql-online.md
+
Last updated 01/24/2020
# Tutorial: Migrate Oracle to Azure Database for PostgreSQL online using DMS (Preview) > [!IMPORTANT]
-> "Oracle to Azure Database for PostgreSQL" migration scenario (currently in preview) will no longer be available after May 1, 2021. We will continue to provide support via alternative tooling (such as Ora2pg) and provide the best migration experience for Oracle to PostgreSQL migrations. For migration best practices, see [Oracle to Azure Database for PostgreSQL migration guide] (https://aka.ms/OracletoPGguide).
+> **Oracle to Azure Database for PostgreSQL** migration scenario (currently in preview) will no longer be available after May 1, 2021. We will continue to provide support via alternative tooling (such as Ora2pg) and provide the best migration experience for Oracle to PostgreSQL migrations. For migration best practices, see [Oracle to Azure Database for PostgreSQL migration guide](https://aka.ms/OracletoPGguide).
You can use Azure Database Migration Service to migrate the databases from Oracle databases hosted on-premises or on virtual machines to [Azure Database for PostgreSQL](../postgresql/index.yml) with minimal downtime. In other words, you can complete the migration with minimal downtime to the application. In this tutorial, you migrate the **HR** sample database from an on-premises or virtual machine instance of Oracle 11g to Azure Database for PostgreSQL by using the online migration activity in Azure Database Migration Service.
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-availability-and-consistency.md
Title: Availability and consistency - Azure Event Hubs | Microsoft Docs description: How to provide the maximum amount of availability and consistency with Azure Event Hubs using partitions. Previously updated : 01/25/2021 Last updated : 03/15/2021
If an Event Hubs namespace has been created with [availability zones](../availab
When a client application sends events to an event hub without specifying a partition, events are automatically distributed among partitions in your event hub. If a partition isn't available for some reason, events are distributed among the remaining partitions. This behavior allows for the greatest amount of up time. For use cases that require the maximum up time, this model is preferred instead of sending events to a specific partition.
-### Availability considerations when using a partition ID or key
-Using a partition ID or partition key is optional. Consider carefully whether or not to use one. If you don't specify a partition ID/key when publishing an event, Event Hubs balances the load among partitions. When you use a partition ID/key, these partitions require availability on a single node, and outages can occur over time. For example, compute nodes may need to be rebooted or patched. So, if you set a partition ID/key and that partition becomes unavailable for some reason, an attempt to access the data in that partition will fail. If high availability is most important, don't specify a partition ID/key. In that case, events are sent to partitions using an internal load-balancing algorithm. In this scenario, you are making an explicit choice between availability (no partition ID/key) and consistency (pinning events to a specific partition). Using partition ID/key downgrades the availability of an event hub to partition-level.
-
-### Availability considerations when handling delays in processing events
-Another consideration is about a consumer application handling delays in processing events. In some cases, it might be better for the consumer application to drop data and retry rather than trying to keep up with processing, which can potentially cause further downstream processing delays. For example, with a stock ticker it's better to wait for complete up-to-date data, but in a live chat or VOIP scenario you'd rather have the data quickly, even if it isn't complete.
-
-Given these availability considerations, in these scenarios, the consumer application might choose one of the following error handling strategies:
--- Stop (stop reading from the event hub until issues are fixed)-- Drop (messages arenΓÇÖt important, drop them)-- Retry (retry the messages as you see fit)-- ## Consistency In some scenarios, the ordering of events can be important. For example, you may want your back-end system to process an update command before a delete command. In this scenario, a client application sends events to a specific partition so that the ordering is preserved. When a consumer application consumes these events from the partition, they are read in order. With this configuration, keep in mind that if the particular partition to which you are sending is unavailable, you will receive an error response. As a point of comparison, if you don't have an affinity to a single partition, the Event Hubs service sends your event to the next available partition.
+Therefore, if high availability is most important, don't target a specific partition (using partition ID/key). Using partition ID/key downgrades the availability of an event hub to partition-level. In this scenario, you are making an explicit choice between availability (no partition ID/key) and consistency (pinning events to a specific partition). For detailed information about partitions in Event Hubs, see [Partitions](event-hubs-features.md#partitions).
## Appendix
+### Send events without specifying a partition
+We recommend sending events to an event hub without setting partition information to allow the Event Hubs service to balance the load across partitions. See the following quick starts to learn how to do so in different programming languages.
+
+- [Send events using .NET](event-hubs-dotnet-standard-getstarted-send.md)
+- [Send events using Java](event-hubs-java-get-started-send.md)
+- [Send events using JavaScript](event-hubs-node-get-started-send.md)
+- [Send events using Python](event-hubs-python-get-started-send.md)
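As a quick illustration of the recommended pattern, here is a minimal Python sketch using the `azure-eventhub` package. It sends a small batch without a partition ID or partition key, so the service distributes the events across partitions. The connection string and event hub name are placeholders to replace with your own values.

```python
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details; replace with your own namespace and event hub.
producer = EventHubProducerClient.from_connection_string(
    conn_str="<NAMESPACE CONNECTION STRING>",
    eventhub_name="<EVENT HUB NAME>",
)

with producer:
    # No partition_id or partition_key: the service balances the events across partitions.
    batch = producer.create_batch()
    batch.add(EventData("First event"))
    batch.add(EventData("Second event"))
    producer.send_batch(batch)
```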
++ ### Send events to a specific partition
-This section shows you how to send events to a specific partition using C#, Java, Python, and JavaScript.
+In this section, you learn how to send events to a specific partition using different programming languages.
### [.NET](#tab/dotnet)
-For the full sample code that shows you how to send an event batch to an event hub (without setting partition ID/key), see [Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)](event-hubs-dotnet-standard-getstarted-send.md).
- To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](/dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key. ```csharp
var sendEventOptions = new SendEventOptions { PartitionKey = "cities" };
await producer.SendAsync(events, sendEventOptions); ```
-### [Java](#tab/java)
-For the full sample code that shows you how to send an event batch to an event hub (without setting partition ID/key), see [Use Java to send events to or receive events from Azure Event Hubs (azure-messaging-eventhubs)](event-hubs-java-get-started-send.md).
+### [Java](#tab/java)
To send events to a specific partition, create the batch using the [createBatch](/java/api/com.azure.messaging.eventhubs.eventhubproducerclient.createbatch) method by specifying either **partition ID** or **partition key** in [createBatchOptions](/java/api/com.azure.messaging.eventhubs.models.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key. ```java
SendOptions sendOptions = new SendOptions();
sendOptions.setPartitionKey("cities");
producer.send(events, sendOptions); ```
-### [Python](#tab/python)
-For the full sample code that shows you how to send an event batch to an event hub (without setting partition ID/key), see [Send events to or receive events from event hubs by using Python (azure-eventhub)](event-hubs-python-get-started-send.md).
+### [Python](#tab/python)
To send events to a specific partition, when creating a batch using the [`EventHubProducerClient.create_batch`](/python/api/azure-eventhub/azure.eventhub.eventhubproducerclient#create-batchkwargs-) method, specify the `partition_id` or the `partition_key`. Then, use the [`EventHubProducerClient.send_batch`](/python/api/azure-eventhub/azure.eventhub.aio.eventhubproducerclient#send-batch-event-data-batch--typing-union-azure-eventhub--common-eventdatabatch--typing-list-azure-eventhub-) method to send the batch to the event hub's partition. ```python
You can also use the [EventHubProducerClient.send_batch](/python/api/azure-event
producer.send_batch(event_data_batch, partition_key="cities") ``` - ### [JavaScript](#tab/javascript)
-For the full sample code that shows you how to send an event batch to an event hub (without setting partition ID/key), see [Send events to or receive events from event hubs by using JavaScript (azure/event-hubs)](event-hubs-node-get-started-send.md).
- To send events to a specific partition, [Create a batch](/javascript/api/@azure/event-hubs/eventhubproducerclient#createBatch_CreateBatchOptions_) using the [EventHubProducerClient.CreateBatchOptions](/javascript/api/@azure/event-hubs/eventhubproducerclient#createBatch_CreateBatchOptions_) object by specifying the `partitionId` or the `partitionKey`. Then, send the batch to the event hub using the [EventHubProducerClient.SendBatch](/javascript/api/@azure/event-hubs/eventhubproducerclient#sendBatch_EventDataBatch__OperationOptions_) method. See the following example.
producer.sendBatch(events, sendBatchOptions);
+ ## Next steps You can learn more about Event Hubs by visiting the following links:
-* [Event Hubs service overview](./event-hubs-about.md)
-* [Create an event hub](event-hubs-create.md)
+- [Event Hubs service overview](./event-hubs-about.md)
+- [Event Hubs terminology](event-hubs-features.md)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
Title: Overview of features - Azure Event Hubs | Microsoft Docs description: This article provides details about features and terminology of Azure Event Hubs. Previously updated : 02/19/2021 Last updated : 03/15/2021 # Features and terminology in Azure Event Hubs
Published events are removed from an Event Hub based on a configurable, timed-ba
- For Event Hubs **Dedicated**, the maximum retention period is **90 days**. - If you change the retention period, it applies to all messages including messages that are already in the event hub.
+Event Hubs retains events for a configured retention time that applies across
+all partitions. Events are automatically removed when the retention period has
+been reached. If you specify a retention period of one day, the event will
+become unavailable exactly 24 hours after it has been accepted. You cannot
+explicitly delete events.
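For example, the following sketch (hypothetical timestamps, assuming a one-day retention period) shows when an accepted event stops being readable:

```python
from datetime import datetime, timedelta, timezone

retention = timedelta(days=1)  # assumed retention configured on the event hub
enqueued_time = datetime(2021, 3, 15, 9, 30, tzinfo=timezone.utc)  # hypothetical enqueue time

expires_at = enqueued_time + retention
print(f"Event is readable until {expires_at.isoformat()}")  # 2021-03-16T09:30:00+00:00
```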
+
+If you need to archive events beyond the allowed
+retention period, you can have them [automatically stored in Azure Storage or
+Azure Data Lake by turning on the Event Hubs Capture
+feature](event-hubs-capture-overview.md), and if you need
+to search or analyze such deep archives, you can [easily import them into Azure
+Synapse](store-captured-data-data-warehouse.md) or other
+similar stores and analytics platforms.
+
+The reason for Event Hubs' limit on data retention based on time is to prevent
+large volumes of historic customer data getting trapped in a deep store that is
+only indexed by a timestamp and only allows for sequential access. The
+architectural philosophy here is that historic data needs richer indexing and
+more direct access than the real-time eventing interface that Event Hubs or
+Kafka provide. Event stream engines are not well suited to play the role of data
+lakes or long-term archives for event sourcing.
+
+ > [!NOTE] > Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a > permanent store for infinitely held event streams.
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-scalability.md
Title: Scalability - Azure Event Hubs | Microsoft Docs description: This article provides information on how to scale Azure Event Hubs by using partitions and throughput units. Previously updated : 06/23/2020 Last updated : 03/16/2021 # Scaling with Event Hubs
For more information about the auto-inflate feature, see [Automatically scale th
## Partitions [!INCLUDE [event-hubs-partitions](../../includes/event-hubs-partitions.md)]
-### Partition key
-You can use a [partition key](event-hubs-programming-guide.md#partition-key) to map incoming event data into specific partitions for the purpose of data organization. The partition key is a sender-supplied value passed into an event hub. It is processed through a static hashing function, which creates the partition assignment. If you don't specify a partition key when publishing an event, a round-robin assignment is used.
-
-The event publisher is only aware of its partition key, not the partition to which the events are published. This decoupling of key and partition insulates the sender from needing to know too much about the downstream processing. A per-device or user unique identity makes a good partition key, but other attributes such as geography can also be used to group related events into a single partition.
## Next steps
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
The ExpressRoute gateway will advertise the *Address Space(s)* of the Azure VNet
### How many prefixes can be advertised from a VNet to on-premises on ExpressRoute Private Peering?
-There is a maximum of 1000 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 199 address spaces on a single VNet connected to an ExpressRoute circuit, all 199 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 150 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 151 prefixes to on-premises.
+There is a maximum of 1000 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 999 address spaces on a single VNet connected to an ExpressRoute circuit, all 999 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 500 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 501 prefixes to on-premises.
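To make the arithmetic explicit, this small sketch (hypothetical numbers matching the example above) counts the prefixes advertised to on-premises and checks them against the 1000-prefix limit:

```python
hub_address_spaces = 1          # address spaces on the VNet that has the gateway
spoke_vnets = 500               # spoke VNets enabled with "Allow Remote Gateway"
address_spaces_per_spoke = 1    # assumed one address space per spoke VNet

advertised_prefixes = hub_address_spaces + spoke_vnets * address_spaces_per_spoke
print(advertised_prefixes)              # 501
assert advertised_prefixes <= 1000      # stay within the per-connection limit
```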
### What happens if I exceed the prefix limit on an ExpressRoute connection?
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-coexist-resource-manager.md
You can follow the steps below to add Point-to-Site configuration to your VPN ga
$azureVpn = Get-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName Set-AzVirtualNetworkGateway -VirtualNetworkGateway $azureVpn -VpnClientAddressPool "10.251.251.0/24" ```
-2. Upload the VPN [root certificate](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md#6-generate-certificates) to Azure for your VPN gateway. In this example, it's assumed that the root certificate is stored in the local machine where the following PowerShell cmdlets are run and that you are running PowerShell locally. You can also upload the certificate using the Azure portal.
+2. Upload the VPN [root certificate](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md#Certificates) to Azure for your VPN gateway. In this example, it's assumed that the root certificate is stored in the local machine where the following PowerShell cmdlets are run and that you are running PowerShell locally. You can also upload the certificate using the Azure portal.
```powershell $p2sCertFullName = "RootErVpnCoexP2S.cer"
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Equinix | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport, Swisscom | | **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
-| **Hong Kong2** | [MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, Megaport, PCCW Global Limited, SingTel |
+| **Hong Kong2** | [MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
| **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, Orange, Teraco | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[GlobalConnect]()** | Supported |Supported | Oslo, Stavanger | | **GTT** |Supported |Supported |London2 | | **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai |
+| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
| **Intelsat** | Supported | Supported | Washington DC2 | | **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Hong Kong, London, New York, Paris, Silicon Valley, Singapore, Washington DC, Zurich | | **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC |
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/overview.md
Previously updated : 01/12/2021 Last updated : 03/16/2021
Azure Firewall Manager has the following known issues:
|Bulk IP address addition fails|The secure hub firewall goes into a failed state if you add multiple public IP addresses.|Add smaller public IP address increments. For example, add 10 at a time.| |DDoS Protection Standard not supported with secured virtual hubs|DDoS Protection Standard is not integrated with vWANs.|Investigating| |Activity logs not fully supported|Firewall policy does not currently support Activity logs.|Investigating|
-|Configuring SNAT private IP address ranges|[Private IP range settings](../firewall/snat-private-range.md) are ignored if Azure Firewall policy is configured. The default Azure Firewall behavior is used, where it doesnΓÇÖt SNAT Network rules when the destination IP address is in a private IP address range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918).|Investigating|
|Some firewall settings are not migrated when the firewall is migrated to use Firewall Policy|Availability Zones and SNAT private addresses are not migrated when you migrate to Azure Firewall Policy.|Investigating| ## Next steps
frontdoor Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/resource-manager-template-samples.md
Previously updated : 03/05/2021 Last updated : 03/16/2021 # Azure Resource Manager templates for Azure Front Door
The following table includes links to Azure Resource Manager templates for Azure
|**Storage**| **Description** | | [Storage static website](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-storage-static-website) | Creates an Azure Storage account and static website with a public endpoint, and a Front Door profile. | | [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
+|**Application Gateway**| **Description** |
+| [Application Gateway](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-application-gateway-public) | Creates an Application Gateway, and a Front Door profile. |
+|**Virtual machine**| **Description** |
+| [Virtual machine with Private Link service](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-vm-private-link) | Creates a virtual machine and Private Link service, and a Front Door profile. |
| | |
hdinsight Interactive Query Tutorial Analyze Flight Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/interactive-query-tutorial-analyze-flight-data.md
This tutorial covers the following tasks:
## Download the flight data
-1. Browse to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time).
+1. Browse to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?gnoyr_VQ=FGJ).
2. On the page, clear all fields, and then select the following values:
To delete a cluster, see [Delete an HDInsight cluster using your browser, PowerS
In this tutorial, you took a raw CSV data file, imported it into an HDInsight cluster storage, and then transformed the data using Interactive Query in Azure HDInsight. Advance to the next tutorial to learn about the Apache Hive Warehouse Connector. > [!div class="nextstepaction"]
-> [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](./apache-hive-warehouse-connector.md)
+> [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](./apache-hive-warehouse-connector.md)
hpc-cache Cache Usage Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/cache-usage-models.md
description: Describes the different cache usage models and how to choose among
Previously updated : 03/08/2021 Last updated : 03/15/2021
File caching is how Azure HPC Cache expedites client requests. It uses these bas
If write caching is disabled, the cache doesn't store the changed file and immediately writes it to the back-end storage system.
-* **Write-back delay** - For a cache with write caching turned on, write-back delay is the amount of time the cache waits for additional file changes before moving the file to the back-end storage system.
+* **Write-back delay** - For a cache with write caching turned on, write-back delay is the amount of time the cache waits for additional file changes before copying the file to the back-end storage system.
* **Back-end verification** - The back-end verification setting determines how frequently the cache compares its local copy of a file with the remote version on the back-end storage system. If the back-end copy is newer than the cached copy, the cache fetches the remote copy and stores it for future requests.
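To illustrate how these two settings interact, here is a minimal Python sketch. The values are illustrative only; the actual write-back delay and verification interval depend on the usage model you choose.

```python
from datetime import datetime, timedelta

# Illustrative values; real settings come from the selected usage model.
WRITE_BACK_DELAY = timedelta(minutes=20)
BACK_END_VERIFICATION = timedelta(seconds=30)

def should_write_back(last_client_change: datetime, now: datetime) -> bool:
    # A changed file is copied to back-end storage only after it has seen
    # no further changes for the write-back delay.
    return now - last_client_change >= WRITE_BACK_DELAY

def should_verify(last_verified: datetime, now: datetime) -> bool:
    # The cache re-compares its copy with the back-end copy once the
    # verification interval has elapsed.
    return now - last_verified >= BACK_END_VERIFICATION
```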
You must choose a usage model for each NFS-mounted storage target that you use.
HPC cache usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
-There are several options:
+These are the usage model options:
* **Read heavy, infrequent writes** - Use this option if you want to speed up read access to files that are static or rarely changed.
There are several options:
Do not use this option if there is a risk that a file might be modified directly on the storage system without first writing it to the cache. If that happens, the cached version of the file will be out of sync with the back-end file.
-* **Greater than 15% writes** - This option speeds up both read and write performance. When using this option, all clients must access files through the Azure HPC Cache instead of mounting the back-end storage directly. The cached files will have recent changes that are not stored on the back end.
+* **Greater than 15% writes** - This option speeds up both read and write performance. When using this option, all clients must access files through the Azure HPC Cache instead of mounting the back-end storage directly. The cached files will have recent changes that have not yet been copied to the back end.
In this usage model, files in the cache are only checked against the files on back-end storage every eight hours. The cached version of the file is assumed to be more current. A modified file in the cache is written to the back-end storage system after it has been in the cache for 20 minutes<!-- an hour --> with no additional changes. * **Clients write to the NFS target, bypassing the cache** - Choose this option if any clients in your workflow write data directly to the storage system without first writing to the cache, or if you want to optimize data consistency. Files that clients request are cached (reads), but any changes to those files from the client (writes) are not cached. They are passed through directly to the back-end storage system.
- With this usage model, the files in the cache are frequently checked against the back-end versions for updates. This verification allows files to be changed outside of the cache while maintaining data consistency.
+ With this usage model, the files in the cache are frequently checked against the back-end versions for updates - every 30 seconds. This verification allows files to be changed outside of the cache while maintaining data consistency.
+
+ > [!TIP]
+ > The first three usage models above cover the majority of Azure HPC Cache workflows. The remaining options are for less common scenarios.
* **Greater than 15% writes, checking the backing server for changes every 30 seconds** and **Greater than 15% writes, checking the backing server for changes every 60 seconds** - These options are designed for workflows where you want to speed up both reads and writes, but there's a chance that another user will write directly to the back-end storage system. For example, if multiple sets of clients are working on the same files from different locations, these usage models might make sense to balance the need for quick file access with low tolerance for stale content from the source.
There are several options:
This table summarizes the usage model differences:
-| Usage model | Caching mode | Back-end verification | Maximum write-back delay |
+
+<!-- | Usage model | Caching mode | Back-end verification | Maximum write-back delay |
|-|--|--|--| | Read heavy, infrequent writes | Read | Never | None | | Greater than 15% writes | Read/write | 8 hours | 20 minutes |
This table summarizes the usage model differences:
| Greater than 15% writes, frequent back-end checking (60 seconds) | Read/write | 60 seconds | 20 minutes | | Greater than 15% writes, frequent write-back | Read/write | 30 seconds | 30 seconds | | Read heavy, checking the backing server every 3 hours | Read | 3 hours | None |-
+-->
If you have questions about the best usage model for your Azure HPC Cache workflow, talk to your Azure representative or open a support request for help. ## Next steps
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/configuration.md
description: Explains how to configure additional settings for the cache like MT
Previously updated : 03/11/2021 Last updated : 03/15/2021
Check that your DNS configuration can successfully resolve these items before us
* Certificate revocation list (CRL) download and online certificate status protocol (OCSP) verification services. A partial list is provided in the [firewall rules item](../security/fundamentals/tls-certificate-changes.md#will-this-change-affect-me) at the end of this [Azure TLS article](../security/fundamentals/tls-certificate-changes.md), but you should consult a Microsoft technical representative to understand all of the requirements. * The fully qualified domain name of your NTP server (time.microsoft.com or a custom server)
-If you need to set a custom DNS server for your cache, fill in the provided fields:
+If you need to set a custom DNS server for your cache, use the provided fields:
-* **DNS search domain** - Enter your search domain, for example, ``contoso.com``. A single value is allowed.
+* **DNS search domain** (optional) - Enter your search domain, for example, ``contoso.com``. A single value is allowed, or you can leave it blank.
* **DNS server(s)** - Enter up to three DNS servers. Specify them by IP address. <!--
hpc-cache Directory Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/directory-services.md
description: How to configure directory services for client access to storage ta
Previously updated : 12/22/2020 Last updated : 03/15/2021
In the **Secure access** section, you can enable encryption and certificate vali
* **Auto-download certificate** - Choose **Yes** if you want to try to download a certificate as soon as you submit these settings.
-Fill in the **Credentials** section if you want to use static credentials for LDAP security.
+Fill in the **Credentials** section if you want to use static credentials for LDAP security. This information is encrypted when stored, and can't be queried.
* **Bind DN** - Enter the bind distinguished name to use to authenticate to the LDAP server. (Use DN format.) * **Bind password** - Provide the password for the bind DN.
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
description: How to define storage targets so that your Azure HPC Cache can use
Previously updated : 03/11/2021 Last updated : 03/15/2021
For details about the other options, read [Understand usage models](cache-usage-
This table summarizes the differences among all of the usage models:
-| Usage model | Caching mode | Back-end verification | Maximum write-back delay |
+
+<!-- | Usage model | Caching mode | Back-end verification | Maximum write-back delay |
|--|--|--|--| | Read heavy, infrequent writes | Read | Never | None | | Greater than 15% writes | Read/write | 8 hours | 20 minutes |
This table summarizes the differences among all of the usage models:
| Greater than 15% writes, frequent back-end checking (30 seconds) | Read/write | 30 seconds | 20 minutes | | Greater than 15% writes, frequent back-end checking (60 seconds) | Read/write | 60 seconds | 20 minutes | | Greater than 15% writes, frequent write-back | Read/write | 30 seconds | 30 seconds |
-| Read heavy, checking the backing server every 3 hours | Read | 3 hours | None |
+| Read heavy, checking the backing server every 3 hours | Read | 3 hours | None | -->
> [!NOTE] > The **Back-end verification** value shows when the cache automatically compares its files with source files in remote storage. However, you can trigger a comparison by sending a client request that includes a readdirplus operation on the back-end storage system. Readdirplus is a standard NFS API (also called extended read) that returns directory metadata, which causes the cache to compare and update files.
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-prerequisites.md
description: Prerequisites for using Azure HPC Cache
Previously updated : 03/11/2021 Last updated : 03/15/2021
The cache needs DNS to access resources outside of its virtual network. Dependin
* To access Azure Blob storage endpoints and other internal resources, you need the Azure-based DNS server. * To access on-premises storage, you need to configure a custom DNS server that can resolve your storage hostnames. You must do this **before** you create the cache.
-If you only need access to Blob storage, you can use the default Azure-provided DNS server for your cache. However, if you need access to other resources, you should create a custom DNS server and configure it to forward any Azure-specific resolution requests to the Azure DNS server.
+If you only use Blob storage, you can use the default Azure-provided DNS server for your cache. However, if you need access to storage or other resources outside of Azure, you should create a custom DNS server and configure it to forward any Azure-specific resolution requests to the Azure DNS server.
To use a custom DNS server, you need to do these setup steps before you create your cache:
Azure HPC Cache also can use a blob container mounted with the NFS protocol as a
The storage account requirements are different for an ADLS-NFS blob storage target and for a standard blob storage target. Follow the instructions in [Mount Blob storage by using the Network File System (NFS) 3.0 protocol](../storage/blobs/network-file-system-protocol-support-how-to.md) carefully to create and configure the NFS-enabled storage account.
-This is a general overview of the steps:
+This is a general overview of the steps. These steps might change, so always refer to the [ADLS-NFS instructions](../storage/blobs/network-file-system-protocol-support-how-to.md) for current details.
1. Make sure that the features you need are available in the regions where you plan to work. 1. Enable the NFS protocol feature for your subscription. Do this *before* you create the storage account.
-1. Create a secure virtual network (VNet) for the storage account. You should use the same virtual network for your NFS-enabled storage account and for your Azure HPC Cache.
+1. Create a secure virtual network (VNet) for the storage account. You should use the same virtual network for your NFS-enabled storage account and for your Azure HPC Cache. (Do not use the same subnet as the cache.)
1. Create the storage account.
import-export Storage Import Export Data To Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/import-export/storage-import-export-data-to-blobs.md
Previously updated : 03/03/2021 Last updated : 03/15/2021
Perform the following steps to prepare the drives.
* If you have added data to a drive that was encrypted by WAImportExport tool, use the following command to unlock the drive:
- `WAImportExport Unlock /externalKey:<BitLocker key (base 64 string) copied from journal (*.jrn*) file>`
+ `WAImportExport Unlock /bk:<BitLocker key (base 64 string) copied from journal (*.jrn*) file>`
5. Open a PowerShell or command-line window with administrative privileges. To change directory to the unzipped folder, run the following command:
key-vault Authentication Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/authentication-fundamentals.md
Azure Key Vault allows you to securely store and manage application credentials such as secrets, keys, and certificates in a central and secure cloud repository. Key Vault eliminates the need to store credentials in your applications. Your applications can authenticate to Key Vault at run time to retrieve credentials.
-As an administrator, you can tightly control which users and applications can access your key vault and you can limit and audit the operations they perform. This document explains the fundamental concepts of the key vault access model. It will and provide you with an introductory level of knowledge and show you how you can authenticate a user or application to key vault from start to finish.
+As an administrator, you can tightly control which users and applications can access your key vault and you can limit and audit the operations they perform. This document explains the fundamental concepts of the key vault access model. It will provide you with an introductory level of knowledge and show you how you can authenticate a user or application to key vault from start to finish.
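As a simple illustration of that run-time retrieval, the following Python sketch uses the Azure Identity and Key Vault Secrets client libraries to read a secret. The vault URL and secret name are placeholders, and the identity running the code is assumed to have permission to read secrets in that vault.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name; grant the running identity read access to secrets first.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net", credential=credential)

secret = client.get_secret("example-database-password")
print(secret.name)  # avoid printing secret.value in real applications
```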
## Required Knowledge
logic-apps Logic Apps Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-gateway-install.md
description: Before you can access data on premises from Azure Logic Apps, downl
ms.suite: integration - Previously updated : 05/15/2020+ Last updated : 03/16/2021+
+#Customer intent: As a software developer, I want to install and set up the on-premises data gateway so that I can create logic app workflows that can access data in on-premises systems.
# Install on-premises data gateway for Azure Logic Apps
-Before you can [connect to on-premises data sources from Azure Logic Apps](../logic-apps/logic-apps-gateway-connection.md), download and install the [on-premises data gateway](https://aka.ms/on-premises-data-gateway-installer) on a local computer. The gateway works as a bridge that provides quick data transfer and encryption between data sources on premises and your logic apps. You can use the same gateway installation with other cloud services, such as Power BI, Power Automate, Power Apps, and Azure Analysis Services. For information about how to use the gateway with these services, see these articles:
+Before you can [connect to on-premises data sources from Azure Logic Apps](../logic-apps/logic-apps-gateway-connection.md), download and install the [on-premises data gateway](https://aka.ms/on-premises-data-gateway-installer) on a local computer. The gateway works as a bridge that provides quick data transfer and encryption between data sources on premises and your logic apps. You can use the same gateway installation with other cloud services, such as Power Automate, Power BI, Power Apps, and Azure Analysis Services. For information about how to use the gateway with these services, see these articles:
* [Microsoft Power Automate on-premises data gateway](/power-automate/gateway-reference) * [Microsoft Power BI on-premises data gateway](/power-bi/service-gateway-onprem)
This article shows how to download, install, and set up your on-premises data ga
* If you plan to use Windows authentication, make sure that you install the gateway on a computer that's a member of the same Active Directory environment as your data sources.
- * The region that you select for your gateway installation is the same location that you must select when you later create the Azure gateway resource for your logic app. By default, this region is the same location as your Azure AD tenant that manages your Azure account. However, you can change the location during gateway installation.
+ * The region that you select for your gateway installation is the same location that you must select when you later create the Azure gateway resource for your logic app. By default, this region is the same location as your Azure AD tenant that manages your Azure user account. However, you can change the location during gateway installation or later.
+
+ > [!IMPORTANT]
+ > During gateway setup, the **Change Region** command is unavailable if you signed in with your Azure Government account, which is associated with an
+ > Azure Active Directory (Azure AD) tenant in the [Azure Government cloud](../azure-government/compare-azure-government-global-azure.md). The gateway
+ > automatically uses the same region as your user account's Azure AD tenant.
+ >
+ > To continue using your Azure Government account, but set up the gateway to work in the global multi-tenant Azure Commercial cloud instead, first sign
+ > in during gateway installation with the `prod@microsoft.com` username. This solution forces the gateway to use the global multi-tenant Azure cloud,
+ > but still lets you continue using your Azure Government account.
* If you're updating your gateway installation, uninstall your current gateway first for a cleaner experience.
machine-learning Migrate Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-execute-r-script.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Execute R Script'
+description: Rebuild Studio (classic) Execute R script modules to run on Azure Machine Learning.
+++++++ Last updated : 03/08/2021+++
+# Migrate Execute R Script modules in Studio (classic)
+
+In this article, you learn how to rebuild a Studio (classic) **Execute R Script** module in Azure Machine Learning.
+
+For more information on migrating from Studio (classic), see the [migration overview article](migrate-overview.md).
+
+## Execute R Script
+
+Azure Machine Learning designer now runs on Linux. Studio (classic) runs on Windows. Due to the platform change, you must adjust your **Execute R Script** modules during migration; otherwise, the pipeline will fail.
+
+To migrate an **Execute R Script** module from Studio (classic), you must replace the `maml.mapInputPort` and `maml.mapOutputPort` interfaces with standard functions.
+
+The following table summarizes the changes to the R Script module:
+
+|Feature|Studio (classic)|Azure Machine Learning designer|
+||||
+|Script Interface|`maml.mapInputPort` and `maml.mapOutputPort`|Function interface|
+|Platform|Windows|Linux|
+|Internet Accessible |No|Yes|
+|Memory|14 GB|Dependent on Compute SKU|
+
+### How to update the R script interface
+
+Here are the contents of a sample **Execute R Script** module in Studio (classic):
+```r
+# Map 1-based optional input ports to variables
+dataset1 <- maml.mapInputPort(1) # class: data.frame
+dataset2 <- maml.mapInputPort(2) # class: data.frame
+
+# Contents of optional Zip port are in ./src/
+# source("src/yourfile.R");
+# load("src/yourData.rdata");
+
+# Sample operation
+data.set = rbind(dataset1, dataset2);
+
+
+# You'll see this output in the R Device port.
+# It'll have your stdout, stderr and PNG graphics device(s).
+
+plot(data.set);
+
+# Select data.frame to be sent to the output Dataset port
+maml.mapOutputPort("data.set");
+```
+
+Here are the updated contents in the designer. Notice that the `maml.mapInputPort` and `maml.mapOutputPort` have been replaced with the standard function interface `azureml_main`.
+```r
+azureml_main <- function(dataframe1, dataframe2){
+    # Use the parameters dataframe1 and dataframe2 directly
+    dataset1 <- dataframe1
+    dataset2 <- dataframe2
+
+    # Contents of optional Zip port are in ./src/
+    # source("src/yourfile.R");
+    # load("src/yourData.rdata");
+
+    # Sample operation
+    data.set = rbind(dataset1, dataset2);
++
+    # You'll see this output in the R Device port.
+    # It'll have your stdout, stderr and PNG graphics device(s).
+    plot(data.set);
+
+  # Return datasets as a Named List
+
+  return(list(dataset1=data.set))
+}
+```
+For more information, see the designer [Execute R Script module reference](../algorithm-module-reference/execute-r-script.md).
+
+### Install R packages from the internet
+
+Azure Machine Learning designer lets you install packages directly from CRAN.
+
+This is an improvement over Studio (classic). Since Studio (classic) runs in a sandbox environment with no internet access, you had to upload scripts in a zip bundle to install more packages.
+
+Use the following code to install CRAN packages in the designer's **Execute R Script** module:
+```r
+  if(!require(zoo)) {
+      install.packages("zoo",repos = "http://cran.us.r-project.org")
+  }
+  library(zoo)
+```
+
+## Next steps
+
+In this article, you learned how to migrate Execute R Script modules to Azure Machine Learning.
+
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. **Migrate Execute R Script modules**.
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-overview.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning'
+description: Migrate from Studio (classic) to Azure Machine Learning for a modernized data science platform.
+++++++ Last updated : 03/08/2021++
+# Migrate to Azure Machine Learning
+
+In this article, you learn how to migrate Studio (classic) assets to Azure Machine Learning. At this time, to migrate resources, you must manually rebuild your experiments.
+
+Azure Machine Learning provides a modernized data science platform that combines no-code and code-first approaches. To learn more about the differences between Studio (classic) and Azure Machine Learning, see the [Assess Azure Machine Learning](#step-1-assess-azure-machine-learning) section.
++
+## Recommended approach
+
+To migrate to Azure Machine Learning, we recommend the following approach:
+
+> [!div class="checklist"]
+> * Step 1: Assess Azure Machine Learning
+> * Step 2: Create a migration plan
+> * Step 3: Rebuild experiments and web services
+> * Step 4: Integrate client apps
+> * Step 5: Clean up Studio (classic) assets
++
+## Step 1: Assess Azure Machine Learning
+1. Learn about [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/): its benefits, costs, and architecture.
+
+1. Compare the capabilities of Azure Machine Learning and Studio (classic).
+
+ >[!NOTE]
+ > The **designer** feature in Azure Machine Learning provides a similar drag-and-drop experience to Studio (classic). However, Azure Machine Learning also provides robust [code-first workflows](../concept-model-management-and-deployment.md) as an alternative. This migration series focuses on the designer, since it's most similar to the Studio (classic) experience.
+
+ [!INCLUDE [aml-compare-classic](../../../includes/machine-learning-compare-classic-aml.md)]
+
+3. Verify that your critical Studio (classic) modules are supported in Azure Machine Learning designer. For more information, see the [Studio (classic) and designer module-mapping](#studio-classic-and-designer-module-mapping) table below.
+
+4. [Create an Azure Machine Learning workspace](https://docs.microsoft.com/azure/machine-learning/how-to-manage-workspace?tabs=azure-portal).
+
+## Step 2: Create a migration plan
+
+1. Identify the Studio (classic) **data sets**, **models**, and **web services** that you want to migrate.
+
+1. Determine the impact that a migration will have on your business.
+
+1. Create a migration plan.
+
+## Step 3: Rebuild experiments and web services
+
+1. [Migrate datasets to Azure Machine Learning](migrate-register-dataset.md).
+1. Use the designer to [rebuild experiments](migrate-rebuild-experiment.md).
+1. Use the designer to [redeploy web services](migrate-rebuild-web-service.md).
+
+ >[!NOTE]
+ > Azure Machine Learning also supports code-first workflows for [datasets](../how-to-create-register-datasets.md), [training](../how-to-set-up-training-targets.md), and [deployment](../how-to-deploy-and-where.md).
+
+## Step 4: Integrate client apps
+
+1. Modify client applications that invoke Studio (classic) web services to use your new [Azure Machine Learning endpoints](migrate-rebuild-integrate-with-client-app.md).
+
+## Step 5: Clean up Studio (classic) assets
+
+1. [Clean up Studio (classic) assets](export-delete-personal-data-dsr.md) to avoid extra charges. You may want to retain assets for fallback until you have validated Azure Machine Learning workloads.
+
+## Studio (classic) and designer module-mapping
+
+Consult the following table to see which modules to use while rebuilding Studio (classic) experiments in the designer.
++
+> [!IMPORTANT]
+> The designer implements modules through open-source Python packages rather than C# packages like Studio (classic). Because of this difference, the output of designer modules may vary slightly from their Studio (classic) counterparts.
++
+|Category|Studio (classic) module|Replacement designer module|
+|--|-|--|
+|Data input and output|- Enter Data Manually </br> - Export Data </br> - Import Data </br> - Load Trained Model </br> - Unpack Zipped Datasets|- Enter Data Manually </br> - Export Data </br> - Import Data|
+|Data Format Conversions|- Convert to CSV </br> - Convert to Dataset </br> - Convert to ARFF </br> - Convert to SVMLight </br> - Convert to TSV|- Convert to CSV </br> - Convert to Dataset|
+|Data Transformation - Manipulation|- Add Columns</br> - Add Rows </br> - Apply SQL Transformation </br> - Cleaning Missing Data </br> - Convert to Indicator Values </br> - Edit Metadata </br> - Join Data </br> - Remove Duplicate Rows </br> - Select Columns in Dataset </br> - Select Columns Transform </br> - SMOTE </br> - Group Categorical Values|- Add Columns</br> - Add Rows </br> - Apply SQL Transformation </br> - Cleaning Missing Data </br> - Convert to Indicator Values </br> - Edit Metadata </br> - Join Data </br> - Remove Duplicate Rows </br> - Select Columns in Dataset </br> - Select Columns Transform </br> - SMOTE|
+|Data Transformation – Scale and Reduce |- Clip Values </br> - Group Data into Bins </br> - Normalize Data </br>- Principal Component Analysis |- Clip Values </br> - Group Data into Bins </br> - Normalize Data|
+|Data Transformation – Sample and Split|- Partition and Sample </br> - Split Data|- Partition and Sample </br> - Split Data|
+|Data Transformation – Filter |- Apply Filter </br> - FIR Filter </br> - IIR Filter </br> - Median Filter </br> - Moving Average Filter </br> - Threshold Filter </br> - User Defined Filter||
+|Data Transformation – Learning with Counts |- Build Counting Transform </br> - Export Count Table </br> - Import Count Table </br> - Merge Count Transform</br> - Modify Count Table Parameters||
+|Feature Selection |- Filter Based Feature Selection </br> - Fisher Linear Discriminant Analysis </br> - Permutation Feature Importance |- Filter Based Feature Selection </br> - Permutation Feature Importance|
+| Model - Classification| - Multiclass Decision Forest </br> - Multiclass Decision Jungle </br> - Multiclass Logistic Regression </br>- Multiclass Neural Network </br>- One-vs-All Multiclass </br>- Two-Class Averaged Perceptron </br>- Two-Class Bayes Point Machine </br>- Two-Class Boosted Decision Tree </br> - Two-Class Decision Forest </br> - Two-Class Decision Jungle </br> - Two-Class Locally-Deep SVM </br> - Two-Class Logistic Regression </br> - Two-Class Neural Network </br> - Two-Class Support Vector Machine | - Multiclass Decision Forest </br> - Multiclass Boost Decision Tree </br> - Multiclass Logistic Regression </br> - Multiclass Neural Network </br> - One-vs-All Multiclass </br> - Two-Class Averaged Perceptron </br> - Two-Class Boosted Decision Tree </br> - Two-Class Decision Forest </br>- Two-Class Logistic Regression </br> - Two-Class Neural Network </br>- Two-Class Support Vector Machine |
+| Model - Clustering| - K-means clustering| - K-means clustering|
+| Model - Regression| - Bayesian Linear Regression </br> - Boosted Decision Tree Regression </br>- Decision Forest Regression </br> - Fast Forest Quantile Regression </br> - Linear Regression </br> - Neural Network Regression </br> - Ordinal Regression </br> - Poisson Regression| - Boosted Decision Tree Regression </br>- Decision Forest Regression </br> - Fast Forest Quantile Regression </br> - Linear Regression </br> - Neural Network Regression </br> - Poisson Regression|
+| Model – Anomaly Detection| - One-Class SVM </br> - PCA-Based Anomaly Detection | - PCA-Based Anomaly Detection|
+| Machine Learning – Evaluate | - Cross Validate Model </br>- Evaluate Model </br>- Evaluate Recommender | - Cross Validate Model </br>- Evaluate Model </br> - Evaluate Recommender|
+| Machine Learning – Train| - Sweep Clustering </br> - Train Anomaly Detection Model </br>- Train Clustering Model </br> - Train Matchbox Recommender </br> - Train Model </br>- Tune Model Hyperparameters| - Train Anomaly Detection Model </br> - Train Clustering Model </br> - Train Model </br> - Train PyTorch Model </br>- Train SVD Recommender </br>- Train Wide and Deep Recommender </br>- Tune Model Hyperparameters|
+| Machine Learning – Score| - Apply Transformation </br>- Assign Data to clusters </br>- Score Matchbox Recommender </br> - Score Model|- Apply Transformation </br> - Assign Data to clusters </br> - Score Image Model </br> - Score Model </br>- Score SVD Recommender </br> - Score Wide and Deep Recommender|
+| OpenCV Library Modules| - Import Images </br>- Pre-trained Cascade Image Classification | |
+| Python Language Modules| - Execute Python Script| - Execute Python Script </br> - Create Python Model |
+| R Language Modules | - Execute R Script </br> - Create R Model| - Execute R Script|
+| Statistical Functions | - Apply Math Operation </br>- Compute Elementary Statistics </br>- Compute Linear Correlation </br>- Evaluate Probability Function </br>- Replace Discrete Values </br>- Summarize Data </br>- Test Hypothesis using t-Test| - Apply Math Operation </br>- Summarize Data|
+| Text Analytics| - Detect Languages </br>- Extract Key Phrases from Text </br>- Extract N-Gram Features from Text </br>- Feature Hashing </br>- Latent Dirichlet Allocation </br>- Named Entity Recognition </br>- Preprocess Text </br>- Score Vowpal Wabbit Version 7-10 Model </br>- Score Vowpal Wabbit Version 8 Model </br>- Train Vowpal Wabbit Version 7-10 Model </br>- Train Vowpal Wabbit Version 8 Model |- Convert Word to Vector </br> - Extract N-Gram Features from Text </br>- Feature Hashing </br>- Latent Dirichlet Allocation </br>- Preprocess Text </br>- Score Vowpal Wabbit Model </br> - Train Vowpal Wabbit Model|
+| Time Series| - Time Series Anomaly Detection | |
+| Web Service | - Input </br> - Output | - Input </br> - Output|
+| Computer Vision| | - Apply Image Transformation </br> - Convert to Image Directory </br> - Init Image Transformation </br> - Split Image Directory </br> - DenseNet Image Classification </br>- ResNet Image Classification |
+
+For more information on how to use individual designer modules, see the [designer module reference](../algorithm-module-reference/module-reference.md).
+
+### What if a designer module is missing?
+
+Azure Machine Learning designer contains the most popular modules from Studio (classic). It also includes new modules that take advantage of the latest machine learning techniques.
+
+If your migration is blocked due to missing modules in the designer, contact us by [creating a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Example migration
+
+The following experiment migration highlights some of the differences between Studio (classic) and Azure Machine Learning.
+
+### Datasets
+
+In Studio (classic), **datasets** were saved in your workspace and could only be used by Studio (classic).
+
+![automobile-price-classic-dataset](./media/migrate-overview/studio-classic-dataset.png)
+
+In Azure Machine Learning, **datasets** are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Secure data access](../concept-data.md#reference-data-in-storage-with-datasets).
+
+![automobile-price-aml-dataset](./media/migrate-overview/aml-dataset.png)
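+
+Because registered datasets are workspace-level assets, you can also load them with the Python SDK. The following is a minimal sketch, assuming a workspace `config.json` is available; the dataset name is a placeholder:
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()  # assumes a config.json for your workspace
+
+# Retrieve the registered dataset by name and load it as a pandas DataFrame
+dataset = Dataset.get_by_name(ws, name="automobile-prices")  # placeholder dataset name
+df = dataset.to_pandas_dataframe()
+print(df.head())
+```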
+
+### Pipeline
+
+In Studio (classic), **experiments** contained the processing logic for your work. You created experiments with drag-and-drop modules.
++
+![automobile-price-classic-experiment](./media/migrate-overview/studio-classic-experiment.png)
+
+In Azure Machine Learning, **pipelines** contain the processing logic for your work. You can create pipelines with either drag-and-drop modules or by writing code.
+
+![automobile-price-aml-pipeline](./media/migrate-overview/aml-pipeline.png)
+
+### Web service endpoint
+
+In Studio (classic), the **REQUEST/RESPOND API** was used for real-time prediction. The **BATCH EXECUTION API** was used for batch prediction or retraining.
+
+![automobile-price-classic-webservice](./media/migrate-overview/studio-classic-web-service.png)
+
+In Azure Machine Learning, **real-time endpoints** are used for real-time prediction. **Pipeline endpoints** are used for batch prediction or retraining.
+
+![automobile-price-aml-endpoint](./media/migrate-overview/aml-endpoint.png)
++
+## Next steps
+
+In this article, you learned the high-level requirements for migrating to Azure Machine Learning. For detailed steps, see the other articles in the Studio (classic) migration series:
+
+1. **Migration overview**.
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
++++++
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-experiment.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild experiment'
+description: Rebuild Studio (classic) experiments in Azure Machine Learning designer.
+++++++ Last updated : 03/08/2021++
+# Rebuild a Studio (classic) experiment in Azure Machine Learning
+
+In this article, you learn how to rebuild a Studio (classic) experiment in Azure Machine Learning. For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
+
+Studio (classic) **experiments** are similar to **pipelines** in Azure Machine Learning. However, in Azure Machine Learning, pipelines are built on the same back-end that powers the SDK. This means that you have two options for machine learning development: the drag-and-drop designer or code-first SDKs.
+
+For more information on building pipelines with the SDK, see [What are Azure Machine Learning pipelines](../concept-ml-pipelines.md#building-pipelines-with-the-python-sdk).
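+
+As a rough illustration of the code-first option, the following sketch submits a one-step pipeline with the Python SDK. The script, compute cluster, and experiment names are placeholders rather than part of any migrated experiment:
+
+```Python
+from azureml.core import Workspace, Experiment
+from azureml.pipeline.core import Pipeline
+from azureml.pipeline.steps import PythonScriptStep
+
+ws = Workspace.from_config()
+
+# A single training step; replace the script and compute names with your own
+train_step = PythonScriptStep(
+    name="train",
+    script_name="train.py",
+    source_directory="./scripts",
+    compute_target="cpu-cluster",
+)
+
+pipeline = Pipeline(workspace=ws, steps=[train_step])
+run = Experiment(ws, "migrated-training-pipeline").submit(pipeline)
+run.wait_for_completion(show_output=True)
+```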
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md#create-a-workspace).
+- A Studio (classic) experiment to migrate.
+- [Upload your dataset](migrate-register-dataset.md) to Azure Machine Learning.
+
+## Rebuild the pipeline
+
+After you [migrate your dataset to Azure Machine Learning](migrate-register-dataset.md), you're ready to recreate your experiment.
+
+In Azure Machine Learning, the visual graph is called a **pipeline draft**. In this section, you recreate your classic experiment as a pipeline draft.
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. In the left navigation pane, select **Designer** > **Easy-to-use prebuilt modules**.
+ ![Screenshot showing how to create a new pipeline draft.](../media/tutorial-designer-automobile-price-train-score/launch-designer.png)
+
+1. Manually rebuild your experiment with designer modules.
+
+ Consult the [module-mapping table](migrate-overview.md#studio-classic-and-designer-module-mapping) to find replacement modules. Many of Studio (classic)'s most popular modules have identical versions in the designer.
+
+ > [!Important]
+ > If your experiment uses the Execute R Script module, you need to perform additional steps to migrate your experiment. For more information, see [Migrate R Script modules](migrate-execute-r-script.md).
+
+1. Adjust parameters.
+
+ Select each module and adjust the parameters in the module settings panel to the right. Use the parameters to recreate the functionality of your Studio (classic) experiment. For more information on each module, see the [module reference](../algorithm-module-reference/module-reference.md).
+
+## Submit a run and check results
+
+After you recreate your Studio (classic) experiment, it's time to submit a **pipeline run**.
+
+A pipeline run executes on a **compute target** attached to your workspace. You can set a default compute target for the entire pipeline, or you can specify compute targets on a per-module basis.
+
+Once you submit a run from a pipeline draft, it turns into a **pipeline run**. Each pipeline run is recorded and logged in Azure Machine Learning.
+
+To set a default compute target for the entire pipeline:
+1. Select the **Gear icon** ![Gear icon in the designer](../media/tutorial-designer-automobile-price-train-score/gear-icon.png) next to the pipeline name.
+1. Select **Select compute target**.
+1. Select an existing compute, or create a new compute by following the on-screen instructions.
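+
+If you prefer to create the compute cluster ahead of time with the Python SDK, a minimal sketch follows; the cluster name and VM size are placeholders, and `min_nodes=0` means nodes are released when the cluster is idle:
+
+```Python
+from azureml.core import Workspace
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+ws = Workspace.from_config()
+
+# min_nodes=0 releases nodes when idle (cheaper, slower cold start);
+# min_nodes=1 keeps a node warm so successive runs start faster
+config = AmlCompute.provisioning_configuration(vm_size="STANDARD_DS3_V2",
+                                               min_nodes=0,
+                                               max_nodes=4)
+target = ComputeTarget.create(ws, "cpu-cluster", config)
+target.wait_for_completion(show_output=True)
+```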
+
+Now that your compute target is set, you can submit a pipeline run:
+
+1. At the top of the canvas, select **Submit**.
+1. Select **Create new** to create a new experiment.
+
+ Experiments organize similar pipeline runs together. If you run a pipeline multiple times, you can select the same experiment for successive runs. This is useful for logging and tracking.
+1. Enter an experiment name. Then, select **Submit**.
+
+    The first run may take up to 20 minutes. Since the default compute settings have a minimum node size of 0, the designer must allocate resources after being idle. Successive runs take less time, since the nodes are already allocated. To speed up the running time, you can create a compute resource with a minimum node size of 1 or greater.
+
+After the run finishes, you can check the results of each module:
+
+1. Right-click the module whose output you want to see.
+1. Select either **Visualize**, **View Output**, or **View Log**.
+
+ - **Visualize**: Preview the results dataset.
+ - **View Output**: Open a link to the output storage location. Use this to explore or download the output.
+ - **View Log**: View driver and system logs. Use the **70_driver_log** to see information related to your user-submitted script such as errors and exceptions.
+
+> [!IMPORTANT]
+> Designer modules use open source Python packages, compared to C# packages in Studio (classic). As a result, module output may vary slightly between the designer and Studio (classic).
++
+## Next steps
+
+In this article, you learned how to rebuild a Studio (classic) experiment in Azure Machine Learning. The next step is to [rebuild web services in Azure Machine Learning](migrate-rebuild-web-service.md).
++
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. **Rebuild a Studio (classic) training pipeline**.
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-integrate-with-client-app.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Consume pipeline endpoints'
+description: Integrate pipeline endpoints with client applications in Azure Machine Learning.
+++++++ Last updated : 03/08/2021++
+# Consume pipeline endpoints from client applications
+
+In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](../how-to-consume-web-service.md).
+
+This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md#create-a-workspace).
+- An [Azure Machine Learning real-time endpoint or pipeline endpoint](migrate-rebuild-web-service.md).
++
+## Consume a real-time endpoint
+
+If you deployed your model as a **real-time endpoint**, you can find its REST endpoint and pre-generated consumption code in C#, Python, and R:
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. Go to the **Endpoints** tab.
+1. Select your real-time endpoint.
+1. Select **Consume**.
+
+> [!NOTE]
+> You can also find the Swagger specification for your endpoint in the **Details** tab. Use the Swagger definition to understand your endpoint schema. For more information on Swagger definitions, see the [Swagger official documentation](https://swagger.io/docs/specification/2-0/what-is-swagger/).
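+
+As an illustration, a minimal Python client might call the endpoint as shown below. The scoring URI, key, and input payload are placeholders; use the values and sample request shown on the **Consume** tab for your endpoint:
+
+```Python
+import json
+import requests
+
+scoring_uri = "<scoring URI from the Consume tab>"  # placeholder
+key = "<primary key from the Consume tab>"          # only needed if key auth is enabled
+
+headers = {"Content-Type": "application/json",
+           "Authorization": f"Bearer {key}"}
+
+# The payload shape depends on your model; check the Consume tab or Swagger definition
+payload = {"data": [{"make": "toyota", "horsepower": 62}]}
+
+response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
+print(response.json())
+```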
++
+## Consume a pipeline endpoint
+
+There are two ways to consume a pipeline endpoint:
+
+- REST API calls
+- Integration with Azure Data Factory
+
+### Use REST API calls
+
+Call the REST endpoint from your client application. You can use the Swagger specification for your endpoint to understand its schema:
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. Go to the **Endpoints** tab.
+1. Select **Pipeline endpoints**.
+1. Select your pipeline endpoint.
+1. In the **Pipeline endpoint overview** pane, select the link under **REST endpoint documentation**.
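+
+Once you have the endpoint URL, submitting a run from code is a small REST call. The following sketch uses the Python SDK only to get an Azure AD token; the endpoint URL, experiment name, and parameter assignments are placeholders:
+
+```Python
+import requests
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+# Get an Azure AD bearer token for the REST call
+auth = InteractiveLoginAuthentication()
+aad_headers = auth.get_authentication_header()
+
+rest_endpoint = "<REST endpoint URL from the studio>"  # placeholder
+
+response = requests.post(rest_endpoint,
+                         headers=aad_headers,
+                         json={"ExperimentName": "batch-scoring",            # placeholder
+                               "ParameterAssignments": {"param": "value"}})  # optional simple parameters
+response.raise_for_status()
+print("Submitted pipeline run:", response.json().get("Id"))
+```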
+
+### Use Azure Data Factory
+
+You can call your Azure Machine Learning pipeline as a step in an Azure Data Factory pipeline. For more information, see [Execute Azure Machine Learning pipelines in Azure Data Factory](../../data-factory/transform-data-machine-learning-service.md).
++
+## Next steps
+
+In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](../how-to-consume-web-service.md).
+
+See the rest of the articles in the Azure Machine Learning migration series:
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. **Integrate an Azure Machine Learning web service with client apps**.
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-rebuild-web-service.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild web service'
+description: Rebuild Studio (classic) web services as pipeline endpoints in Azure Machine Learning
+++++++ Last updated : 03/08/2021++
+# Rebuild a Studio (classic) web service in Azure Machine Learning
+
+In this article, you learn how to rebuild a Studio (classic) web service as an **endpoint** in Azure Machine Learning.
+
+Use Azure Machine Learning pipeline endpoints to make predictions, retrain models, or run any generic pipeline. The REST endpoint lets you run pipelines from any platform.
+
+This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see the [migration overview article](migrate-overview.md).
+
+> [!NOTE]
+> This migration series focuses on the drag-and-drop designer. For more information on deploying models programmatically, see [Deploy machine learning models in Azure](../how-to-deploy-and-where.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning training pipeline. For more information, see [Rebuild a Studio (classic) experiment in Azure Machine Learning](migrate-rebuild-experiment.md).
+
+## Real-time endpoint vs pipeline endpoint
+
+Studio (classic) web services have been replaced by **endpoints** in Azure Machine Learning. Use the following table to choose which endpoint type to use:
+
+|Studio (classic) web service| Azure Machine Learning replacement|
+|--|--|
+|Request/respond web service (real-time prediction)|Real-time endpoint|
+|Batch web service (batch prediction)|Pipeline endpoint|
+|Retraining web service (retraining)|Pipeline endpoint|
++
+## Deploy a real-time endpoint
+
+In Studio (classic), you used a **REQUEST/RESPOND web service** to deploy a model for real-time predictions. In Azure Machine Learning, you use a **real-time endpoint**.
+
+There are multiple ways to deploy a model in Azure Machine Learning. One of the simplest ways is to use the designer to automate the deployment process. Use the following steps to deploy a model as a real-time endpoint:
+
+1. Run your completed training pipeline at least once.
+1. After the run completes, at the top of the canvas, select **Create inference pipeline** > **Real-time inference pipeline**.
+
+ ![create realtime inference pipeline](./media/migrate-rebuild-web-service/create-inference-pipeline.png)
+
+ The designer converts the training pipeline into a real-time inference pipeline. A similar conversion also occurs in Studio (classic).
+
+ In the designer, the conversion step also [registers the trained model to your Azure Machine Learning workspace](../how-to-deploy-and-where.md#registermodel).
+
+1. Select **Submit** to run the real-time inference pipeline, and verify that it runs successfully.
+
+1. After you verify the inference pipeline, select **Deploy**.
+
+1. Enter a name for your endpoint and a compute type.
+
+ The following table describes your deployment compute options in the designer:
+
+ | Compute target | Used for | Description | Creation |
+ | -- | -- | -- | -- |
+ |[Azure Kubernetes Service (AKS)](../how-to-deploy-azure-kubernetes-service.md) |Real-time inference|Large-scale, production deployments. Fast response time and service autoscaling.| User-created. For more information, see [Create compute targets](../how-to-create-attach-compute-studio.md#inference-clusters). |
+ |[Azure Container Instances ](../how-to-deploy-azure-container-instance.md)|Testing or development | Small-scale, CPU-based workloads that require less than 48 GB of RAM.| Automatically created by Azure Machine Learning.
+
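+If you later automate deployments with the Python SDK instead of the designer, the equivalent deployment configurations look roughly like the following sketch; the core counts, memory sizes, and autoscale setting are illustrative:
+
+```Python
+from azureml.core.webservice import AciWebservice, AksWebservice
+
+# Small test or dev deployment on Azure Container Instances
+aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
+
+# Larger production deployment on an existing AKS inference cluster
+aks_config = AksWebservice.deploy_configuration(cpu_cores=2, memory_gb=4,
+                                                autoscale_enabled=True)
+```
+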
+### Test the real-time endpoint
+
+After deployment completes, you can see more details and test your endpoint:
+
+1. Go to the **Endpoints** tab.
+1. Select your endpoint.
+1. Select the **Test** tab.
+
+ ![Screenshot showing the Endpoints tab with the Test endpoint button](./media/migrate-rebuild-web-service/test-realtime-endpoint.png)
+
+## Publish a pipeline endpoint for batch prediction or retraining
+
+You can also use your training pipeline to create a **pipeline endpoint** instead of a real-time endpoint. Use **pipeline endpoints** to perform either batch prediction or retraining.
+
+Pipeline endpoints replace Studio (classic) **batch execution endpoints** and **retraining web services**.
+
+### Publish a pipeline endpoint for batch prediction
+
+Publishing a pipeline endpoint for batch prediction is similar to deploying a real-time endpoint.
+
+Use the following steps to publish a pipeline endpoint for batch prediction:
+
+1. Run your completed training pipeline at least once.
+
+1. After the run completes, at the top of the canvas, select **Create inference pipeline** > **Batch inference pipeline**.
+
+ ![Screenshot showing the create inference pipeline button on a training pipeline](./media/migrate-rebuild-web-service/create-inference-pipeline.png)
+
+ The designer converts the training pipeline into a batch inference pipeline. A similar conversion also occurs in Studio (classic).
+
+ In the designer, this step also [registers the trained model to your Azure Machine Learning workspace](../how-to-deploy-and-where.md#registermodel).
+
+1. Select **Submit** to run the batch inference pipeline and verify that it successfully completes.
+
+1. After you verify the inference pipeline, select **Publish**.
+
+1. Create a new pipeline endpoint or select an existing one.
+
+ A new pipeline endpoint creates a new REST endpoint for your pipeline.
+
+    If you select an existing pipeline endpoint, you don't overwrite the existing pipeline. Instead, Azure Machine Learning versions each pipeline in the endpoint. You can specify which version to run in your REST call. You can also set a default pipeline version to run if the REST call doesn't specify one.
++
+### Publish a pipeline endpoint for retraining
+
+To publish a pipeline endpoint for retraining, you must already have a pipeline draft that trains a model. For more information on building a training pipeline, see [Rebuild a Studio (classic) experiment](migrate-rebuild-experiment.md).
+
+To reuse your pipeline endpoint for retraining, you must create a **pipeline parameter** for your input dataset. This lets you dynamically set your training dataset, so that you can retrain your model.
+
+Use the following steps to publish a retraining pipeline endpoint:
+
+1. Run your training pipeline at least once.
+1. After the run completes, select the dataset module.
+1. In the module details pane, select **Set as pipeline parameter**.
+1. Provide a descriptive name like "InputDataset".
+
+ ![Screenshot highlighting how to create a pipeline parameter](./media/migrate-rebuild-web-service/create-pipeline-parameter.png)
+
+ This creates a pipeline parameter for your input dataset. When you call your pipeline endpoint for training, you can specify a new dataset to retrain the model.
+
+1. Select **Publish**.
+
+ ![Screenshot highlighting the Publish button on a training pipeline](./media/migrate-rebuild-web-service/create-retraining-pipeline.png)
++
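+
+If you later want to trigger retraining from code instead of the studio, one possible sketch uses the SDK's `PipelineEndpoint` class. The endpoint, experiment, and dataset names are placeholders, and the value accepted for the parameter depends on how the pipeline parameter was defined:
+
+```Python
+from azureml.core import Workspace, Dataset
+from azureml.pipeline.core import PipelineEndpoint
+
+ws = Workspace.from_config()
+
+endpoint = PipelineEndpoint.get(workspace=ws, name="retraining-endpoint")  # placeholder name
+new_data = Dataset.get_by_name(ws, name="automobile-prices-2021")          # placeholder dataset
+
+# Pass the new training data through the "InputDataset" pipeline parameter created above
+run = endpoint.submit("retraining-run",
+                      pipeline_parameters={"InputDataset": new_data})
+```
+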
+## Call your pipeline endpoint from the studio
+
+After you create your batch inference or retraining pipeline endpoint, you can call your endpoint directly from your browser.
+
+1. Go to the **Pipelines** tab, and select **Pipeline endpoints**.
+1. Select the pipeline endpoint you want to run.
+1. Select **Submit**.
+
+ You can specify any pipeline parameters after you select **Submit**.
+
+## Next steps
+
+In this article, you learned how to rebuild a Studio (classic) web service in Azure Machine Learning. The next step is to [integrate your web service with client apps](migrate-rebuild-integrate-with-client-app.md).
++
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. **Rebuild a Studio (classic) web service**.
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/migrate-register-dataset.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild dataset'
+description: Rebuild Studio (classic) datasets in Azure Machine Learning designer
+++++++ Last updated : 02/04/2021++
+# Migrate a Studio (classic) dataset to Azure Machine Learning
+
+In this article, you learn how to migrate a Studio (classic) dataset to Azure Machine Learning. For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
+
+You have three options to migrate a dataset to Azure Machine Learning. Read each section to determine which option is best for your scenario.
++
+|Where is the data? | Migration option |
+|--|--|
+|In Studio (classic) | Option 1: [Download the dataset from Studio (classic) and upload it to Azure Machine Learning](#download-the-dataset-from-studio-classic). |
+|Cloud storage | Option 2: [Register a dataset from a cloud source](#import-data-from-cloud-sources). <br><br> Option 3: [Use the Import Data module to get data from a cloud source](#import-data-from-cloud-sources). |
+
+> [!NOTE]
+> Azure Machine Learning also supports [code-first workflows](../how-to-create-register-datasets.md) for creating and managing datasets.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md#create-a-workspace).
+- A Studio (classic) dataset to migrate.
++
+## Download the dataset from Studio (classic)
+
+The simplest way to migrate a Studio (classic) dataset to Azure Machine Learning is to download your dataset and register it in Azure Machine Learning. This creates a new copy of your dataset and uploads it to an Azure Machine Learning datastore.
+
+You can download the following Studio (classic) dataset types directly.
+
+* Plain text (.txt)
+* Comma-separated values (CSV) with a header (.csv) or without (.nh.csv)
+* Tab-separated values (TSV) with a header (.tsv) or without (.nh.tsv)
+* Excel file
+* Zip file (.zip)
+
+To download datasets directly:
+1. Go to your Studio (classic) workspace ([https://studio.azureml.net](https://studio.azureml.net)).
+1. In the left navigation bar, select the **Datasets** tab.
+1. Select the dataset(s) you want to download.
+1. In the bottom action bar, select **Download**.
+
+ ![Screenshot showing how to download a dataset in Studio (classic)](./media/migrate-register-dataset/download-dataset.png)
+
+For the following data types, you must use the **Convert to CSV** module to download datasets.
+
+* SVMLight data (.svmlight)
+* Attribute Relation File Format (ARFF) data (.arff)
+* R object or workspace file (.RData)
+* Dataset type (.data). The Dataset type is Studio (classic)'s internal data type for module output.
+
+To convert your dataset to a CSV and download the results:
+
+1. Go to your Studio (classic) workspace ([https://studio.azureml.net](https://studio.azureml.net)).
+1. Create a new experiment.
+1. Drag and drop the dataset you want to download onto the canvas.
+1. Add a **Convert to CSV** module.
+1. Connect the **Convert to CSV** input port to the output port of your dataset.
+1. Run the experiment.
+1. Right-click the **Convert to CSV** module.
+1. Select **Results dataset** > **Download**.
+
+ ![Screenshot showing how to setup a convert to CSV pipeline](./media/migrate-register-dataset/csv-download-dataset.png)
+
+### Upload your dataset to Azure Machine Learning
+
+After you download the data file, you can register the dataset in Azure Machine Learning:
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. In the left navigation pane, select the **Datasets** tab.
+1. Select **Create dataset** > **From local files**.
+ ![Screenshot showing the datasets tab and the button for creating a local file](./media/migrate-register-dataset/register-dataset.png)
+1. Enter a name and description.
+1. For **Dataset type**, select **Tabular**.
+
+ > [!NOTE]
+ > You can also upload ZIP files as datasets. To upload a ZIP file, select **File** for **Dataset type**.
+
+1. For **Datastore and file selection**, select the datastore you want to upload your dataset file to.
+
+ By default, Azure Machine Learning stores the dataset to the default workspace blobstore. For more information on datastores, see [Connect to storage services](../how-to-access-data.md).
+
+1. Set the data parsing settings and schema for your dataset. Then, confirm your settings.
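+
+The same upload-and-register flow is available from the Python SDK if you prefer code. A minimal sketch, assuming the CSV you downloaded from Studio (classic) is in the current folder; the file, path, and dataset names are placeholders:
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# Upload the downloaded file to the workspace's default blob datastore
+datastore.upload_files(files=["./automobile-prices.csv"],  # placeholder file name
+                       target_path="migrated-data/",
+                       overwrite=True)
+
+# Register it as a tabular dataset so it can be used in the designer and the SDK
+dataset = Dataset.Tabular.from_delimited_files(
+    path=(datastore, "migrated-data/automobile-prices.csv"))
+dataset.register(workspace=ws, name="automobile-prices", create_new_version=True)
+```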
+
+## Import data from cloud sources
+
+If your data is already in a cloud storage service and you want to keep it in its native location, you can use either of the following options:
+
+|Ingestion method|Description|
+|--|--|
+|Register an Azure Machine Learning dataset|Ingest data from local and online data sources (Blob, ADLS Gen1, ADLS Gen2, File share, SQL DB). <br><br>Creates a reference to the data source, which is lazily evaluated at runtime. Use this option if you repeatedly access this dataset and want to enable advanced data features like data versioning and monitoring.
+|Import Data module|Ingest data from online data sources (Blob, ADLS Gen1, ADLS Gen2, File share, SQL DB). <br><br> The dataset is only imported to the current designer pipeline run.
++
+>[!Note]
+> Studio (classic) users should note that the following cloud sources are not natively supported in Azure Machine Learning:
+> - Hive Query
+> - Azure Table
+> - Azure Cosmos DB
+> - On-premises SQL Database
+>
+> We recommend that users migrate their data to a supported storage service using Azure Data Factory.
+
+### Register an Azure Machine Learning dataset
+
+Use the following steps to register a dataset to Azure Machine Learning from a cloud service:
+
+1. [Create a datastore](../how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
+
+1. [Register a dataset](../how-to-connect-data-ui.md#create-datasets). If you are migrating a Studio (classic) dataset, select the **Tabular** dataset setting.
+
+After you register a dataset in Azure Machine Learning, you can use it in designer:
+
+1. Create a new designer pipeline draft.
+1. In the module palette to the left, expand the **Datasets** section.
+1. Drag your registered dataset onto the canvas.
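+
+Both steps can also be done in code. A hedged sketch with the Python SDK, where the datastore, storage account, container, and dataset names are placeholders:
+
+```Python
+from azureml.core import Workspace, Datastore, Dataset
+
+ws = Workspace.from_config()
+
+# Step 1: link the storage account to the workspace as a datastore
+datastore = Datastore.register_azure_blob_container(
+    workspace=ws,
+    datastore_name="migrated_blob",        # placeholder
+    container_name="training-data",        # placeholder
+    account_name="mystorageaccount",       # placeholder
+    account_key="<storage-account-key>")
+
+# Step 2: register a tabular dataset that references files in that datastore
+dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "automobile/*.csv"))
+dataset.register(workspace=ws, name="automobile-prices", create_new_version=True)
+```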
+
+### Use the Import Data module
+
+Use the following steps to import data directly to your designer pipeline:
+
+1. [Create a datastore](../how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
+
+After you create the datastore, you can use the [**Import Data**](../algorithm-module-reference/import-data.md) module in the designer to ingest data from it:
+
+1. Create a new designer pipeline draft.
+1. In the module palette to the left, find the **Import Data** module and drag it to the canvas.
+1. Select the **Import Data** module, and use the settings in the right panel to configure your data source.
+
+## Next steps
+
+In this article, you learned how to migrate a Studio (classic) dataset to Azure Machine Learning. The next step is to [rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
++
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. **Migrate datasets**.
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
There are a few exceptions and limitations to choosing a VM size:
See the following table to learn more about supported series and restrictions.
-| **Supported VM series** | **Restrictions** |
-|||
-| D | None. |
-| DDSv4 | None. |
-| Dv2 | None. |
-| Dv3 | None.|
-| DSv2 | None. |
-| DSv3 | None.|
-| EAv4 | None. |
-| Ev3 | None. |
-| FSv2 | None. |
-| H | None. |
-| HB | Requires approval. |
-| HBv2 | Requires approval. |
-| HCS | Requires approval. |
-| M | Requires approval. |
-| NC | None. |
-| NC Promo | None. |
-| NCsv2 | Requires approval. |
-| NCsv3 | Requires approval. |
-| NDs | Requires approval. |
-| NDv2 | Requires approval. |
-| NV | None. |
-| NVv3 | Requires approval. |
+| **Supported VM series** | **Restrictions** | **Category** | **Supported by** |
+|--|--|--|--|
+| D | None. | General purpose | Compute clusters and instance |
+| DDSv4 | None. | General purpose | Compute clusters and instance |
+| Dv2 | None. | General purpose | Compute clusters and instance |
+| Dv3 | None.| General purpose | Compute clusters and instance |
+| DSv2 | None. | General purpose | Compute clusters and instance |
+| DSv3 | None.| General purpose | Compute clusters and instance |
+| EAv4 | None. | Memory optimized | Compute clusters and instance |
+| Ev3 | None. | Memory optimized | Compute clusters and instance |
+| FSv2 | None. | Compute optimized | Compute clusters and instance |
+| H | None. | High performance compute | Compute clusters and instance |
+| HB | Requires approval. | High performance compute | Compute clusters and instance |
+| HBv2 | Requires approval. | High performance compute | Compute clusters and instance |
+| HCS | Requires approval. | High performance compute | Compute clusters and instance |
+| M | Requires approval. | Memory optimized | Compute clusters and instance |
+| NC | None. | GPU | Compute clusters and instance |
+| NC Promo | None. | GPU | Compute clusters and instance |
+| NCsv2 | Requires approval. | GPU | Compute clusters and instance |
+| NCsv3 | Requires approval. | GPU | Compute clusters and instance |
+| NDs | Requires approval. | GPU | Compute clusters and instance |
+| NDv2 | Requires approval. | GPU | Compute clusters and instance |
+| NV | None. | GPU | Compute clusters and instance |
+| NVv3 | Requires approval. | GPU | Compute clusters and instance |
While Azure Machine Learning supports these VM series, they might not be available in all Azure regions. To check whether VM series are available, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
To get a featurization summary and understand what features were added to a part
> The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiments final metrics score due to these factors. ## Register and deploy models
+You can register a model so that you can come back to it later.
-For details on how to download or register a model for deployment to a web service, see [how and where to deploy a model](how-to-deploy-and-where.md).
+To register a model from an automated ML run, use the [`register_model()`](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun#register-model-model-name-none--description-none--tags-none--iteration-none--metric-none-) method.
+```Python
+
+best_run, fitted_model = remote_run.get_output()
+print(fitted_model.steps)
+
+model_name = best_run.properties['model_name']
+description = 'AutoML forecast example'
+tags = None
+
+model = remote_run.register_model(model_name = model_name,
+ description = description,
+ tags = tags)
+```
++
+For details on how to create a deployment configuration and deploy a registered model to a web service, see [how and where to deploy a model](how-to-deploy-and-where.md?tabs=python#define-a-deployment-configuration).
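+
+As a rough sketch of what that can look like in code, continuing from the registration snippet above (the scoring-file name, service name, and `ws` workspace object are assumptions):
+
+```Python
+from azureml.core.model import InferenceConfig, Model
+from azureml.core.webservice import AciWebservice
+
+# Download the scoring script generated by the automated ML run
+# (the file name is illustrative; check the run's outputs folder)
+best_run.download_file("outputs/scoring_file_v_1_0_0.py", "score.py")
+
+inference_config = InferenceConfig(entry_script="score.py",
+                                   environment=best_run.get_environment())
+deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
+
+service = Model.deploy(ws, "automl-forecast-service", [model],
+                       inference_config, deployment_config)
+service.wait_for_deployment(show_output=True)
+```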
+
+> [!TIP]
+> For registered models, one-click deployment is available via the [Azure Machine Learning studio](https://ml.azure.com). See [how to deploy registered models from the studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
<a name="explain"></a> ## Model interpretability
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
For more information on `az ml model register`, consult the [reference documenta
For more information, see the [AutoMLRun.register_model](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun#register-model-model-name-none--description-none--tags-none--iteration-none--metric-none-) documentation.
+ To deploy a registered model from an `AutoMLRun`, we recommend doing so via the [one-click deploy button in Azure Machine learning studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
### Register a model from a local file You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file. You can use this method to register models trained with Azure Machine Learning and then downloaded. You can also use this method to register models trained outside of Azure Machine Learning.
For more information, see the documentation for [WebService.delete()](/python/ap
* [Create client applications to consume web services](how-to-consume-web-service.md) * [Update web service](how-to-deploy-update-web-service.md) * [How to deploy a model using a custom Docker image](how-to-deploy-custom-docker-image.md)
+* [One click deployment for automated ML runs in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model)
* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md) * [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md) * [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-studio-virtual-network.md
This article is part five of a five-part virtual network series. See the rest of
* [Part 1: Virtual network overview](how-to-network-security-overview.md) * [Part 2: Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Part 3: Secure the training environment](how-to-secure-training-vnet.md)
-* [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md)
+* [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md)
+
+Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-network-security-overview.md
This article assumes that you have familiarity with the following topics:
+ [Azure Private Link](how-to-configure-private-link.md) + [Network Security Groups (NSG)](../virtual-network/network-security-groups-overview.md) + [Network firewalls](../firewall/overview.md)- ## Example scenario In this section, you learn how a common network scenario is set up to secure Azure Machine Learning communication with private IP addresses.
The next five sections show you how to secure the network scenario described abo
1. Secure the [**training environment**](#secure-the-training-environment). 1. Secure the [**inferencing environment**](#secure-the-inferencing-environment). 1. Optionally: [**enable studio functionality**](#optional-enable-studio-functionality).
-1. Configure [**firewall settings**](#configure-firewall-settings)
-
+1. Configure [**firewall settings**](#configure-firewall-settings).
+1. Configure [DNS name resolution](#custom-dns).
## Secure the workspace and associated resources Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network.
This article is part one of a five-part virtual network series. See the rest of
* [Part 3: Secure the training environment](how-to-secure-training-vnet.md) * [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md) * [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)+
+Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-inferencing-vnet.md
This article is part four of a five-part virtual network series. See the rest of
* [Part 1: Virtual network overview](how-to-network-security-overview.md) * [Part 2: Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Part 3: Secure the training environment](how-to-secure-training-vnet.md)
-* [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)
+* [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)
+
+Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
This article is part three of a five-part virtual network series. See the rest o
* [Part 2: Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md) * [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)+
+Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
This article is part two of a five-part virtual network series. See the rest of
* [Part 1: Virtual network overview](how-to-network-security-overview.md) * [Part 3: Secure the training environment](how-to-secure-training-vnet.md) * [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md)
-* [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)
+* [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)
+
+Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If this pattern is expected in your time series, you can switch your primary met
1. Unzip the package 1. Deploy using the unzipped assets
+## Azure Functions application
+
+ Automated ML does not currently support Azure Functions applications.
+ ## Sample notebook failures If a sample notebook fails with an error that property, method, or library does not exist:
If this pattern is expected in your time series, you can switch your primary met
+ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md) or [how to train using Automated machine learning on a remote resource](how-to-auto-train-remote.md).
-+ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
++ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
For a Python code-based experience, [configure your automated machine learning e
## Get started
-1. Sign in to Azure Machine Learning at https://ml.azure.com.
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
1. Select your subscription and workspace.
To get explanations for a particular model,
Once you have the best model at hand, it is time to deploy it as a web service to predict on new data.
+>[!TIP]
+> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](how-to-deploy-and-where.md?tabs=python#register-a-model-from-an-azure-ml-training-run-1) to the workspace.
+>
+> Once your model is registered, find it in the studio by selecting **Models** on the left pane. After you open your model, you can select the **Deploy** button at the top of the screen, and then follow the instructions as described in **step 2** of the **Deploy your model** section.
+ Automated ML helps you with deploying the model without writing code: 1. You have a couple options for deployment.
machine-learning Overview What Is Machine Learning Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-machine-learning-studio.md
Even if you're an experienced developer, the studio can simplify how you manage
## ML Studio (classic) vs Azure Machine Learning studio
-Released in 2015, **ML Studio (classic)** was our first drag-and-drop machine learning builder. It is a standalone service that only offers a visual experience. Studio (classic) does not interoperate with Azure Machine Learning.
+Released in 2015, **ML Studio (classic)** was our first drag-and-drop machine learning builder.
-**Azure Machine Learning** is a separate and modernized service that delivers a complete data science platform. It supports both code-first and low-code experiences.
+**ML Studio (classic)** is a standalone service that only offers a visual experience. Studio (classic) does not interoperate with Azure Machine Learning.
+
+**Azure Machine Learning** is a separate and modernized service that delivers a complete data science platform. It supports both code-first and low-code experiences.
**Azure Machine Learning studio** is a web portal *in* Azure Machine Learning that contains low-code and no-code options for project authoring and asset management.
-We recommend that new users choose **Azure Machine Learning**, instead of ML Studio (classic), for the latest range of data science tools.
+We recommend that new users choose **Azure Machine Learning**, instead of ML Studio (classic), for the latest range of data science tools. If you are an existing ML Studio (classic) user, consider [migrating to Azure Machine Learning](classic/migrate-overview.md).
+
+Here are some of the benefits of switching to Azure Machine Learning:
+
+- Scalable compute clusters for large-scale training.
+- Enterprise security and governance.
+- Interoperable with popular open-source tools.
+- End-to-end MLOps.
### Feature comparison
-The following table summarizes the key differences between ML Studio (classic) and Azure Machine Learning.
-
-| Feature | ML Studio (classic) | Azure Machine Learning |
-|| | |
-| Drag and drop interface | Classic experience | Updated experience - [Azure Machine Learning designer](concept-designer.md)|
-| Code SDKs | Unsupported | Fully integrated with [Azure Machine Learning Python](/python/api/overview/azure/ml/) and [R](https://github.com/Azure/azureml-sdk-for-r) SDKs |
-| Experiment | Scalable (10-GB training data limit) | Scale with compute target |
-| Training compute targets | Proprietary compute target, CPU support only | Wide range of customizable [training compute targets](concept-compute-target.md#train). Includes GPU and CPU support |
-| Deployment compute targets | Proprietary web service format, not customizable | Wide range of customizable [deployment compute targets](concept-compute-target.md#deploy). Includes GPU and CPU support |
-| ML Pipeline | Not supported | Build flexible, modular [pipelines](concept-ml-pipelines.md) to automate workflows |
-| MLOps | Basic model management and deployment; CPU only deployments | Entity versioning (model, data, workflows), workflow automation, integration with CICD tooling, CPU and GPU deployments [and more](concept-model-management-and-deployment.md) |
-| Model format | Proprietary format, Studio (classic) only | Multiple supported formats depending on training job type |
-| Automated model training and hyperparameter tuning | Not supported | [Supported](concept-automated-ml.md). Code-first and no-code options. |
-| Data drift detection | Not supported | [Supported](how-to-monitor-datasets.md) |
-| Data labeling projects | Not supported | [Supported](how-to-create-labeling-projects.md) |
## Troubleshooting
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/create-cluster-cli.md
Previously updated : 03/02/2021 Last updated : 03/15/2021 # Quickstart: Create an Azure Managed Instance for Apache Cassandra cluster using Azure CLI (Preview)
This quickstart demonstrates how to use the Azure CLI commands to create a clust
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. > [!IMPORTANT]
-> This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
+> This article requires the Azure CLI version 2.17.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
## <a id="create-cluster"></a>Create a managed instance cluster
This quickstart demonstrates how to use the Azure CLI commands to create a clust
> [!NOTE] > The `assignee` and `role` values in the previous command are fixed values, enter these values exactly as mentioned in the command. Not doing so will lead to errors when creating the cluster. If you encounter any errors when executing this command, you may not have permissions to run it, please reach out to your admin for permissions.
-1. Next create the cluster in your newly created Virtual Network. Run the following command and make sure that you use the `Resource ID` value retrieved in the previous command as the value of `delegatedManagementSubnetId` variable:
+1. Next, create the cluster in your newly created Virtual Network by using the [az managed-cassandra cluster create](/cli/azure/ext/cosmosdb-preview/managed-cassandra/cluster?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_cluster_create) command. Run the following command and make sure that you use the `Resource ID` value retrieved in the previous command as the value of the `delegatedManagementSubnetId` variable:
```azurecli-interactive resourceGroupName='<Resource_Group_Name>'
This quickstart demonstrates how to use the Azure CLI commands to create a clust
--debug ```
-1. Finally, create a datacenter for the cluster, with three nodes:
+1. Finally, create a datacenter for the cluster with three nodes by using the [az managed-cassandra datacenter create](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_create) command:
```azurecli-interactive dataCenterName='dc1'
This quickstart demonstrates how to use the Azure CLI commands to create a clust
--node-count 3 ```
-1. Once the datacenter is created, if you want to scale up, or scale down the nodes in the datacenter, run the following command. Change the value of `node-count` parameter to the desired value:
+1. Once the datacenter is created, if you want to scale up or scale down the nodes in the datacenter, run the [az managed-cassandra datacenter update](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_update) command. Change the value of the `node-count` parameter to the desired value:
```azurecli-interactive resourceGroupName='<Resource_Group_Name>'
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/manage-resources-cli.md
description: Learn about the common commands to automate the management of your
Previously updated : 03/02/2021 Last updated : 03/15/2021
This article describes common commands to automate the management of your Azure
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] > [!IMPORTANT]
-> This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
+> This article requires the Azure CLI version 2.17.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
> > Manage Azure Managed Instance for Apache Cassandra resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
The following sections demonstrate how to manage Azure Managed Instance for Apac
### <a id="create-cluster"></a>Create a managed instance cluster
-Create an Azure Managed Instance for Apache Cassandra cluster:
+Create an Azure Managed Instance for Apache Cassandra cluster by using the [az managed-cassandra cluster create](/cli/azure/ext/cosmosdb-preview/managed-cassandra/cluster?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_cluster_create) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra cluster create \
### <a id="delete-cluster"></a>Delete a managed instance cluster
-Delete a cluster:
+Delete a cluster by using the [az managed-cassandra cluster delete](/cli/azure/ext/cosmosdb-preview/managed-cassandra/cluster?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_cluster_delete) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra cluster delete \
### <a id="get-cluster-details"></a>Get the cluster details
-Get cluster details:
+Get cluster details by using the [az managed-cassandra cluster show](/cli/azure/ext/cosmosdb-preview/managed-cassandra/cluster?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_cluster_show) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra cluster show \
### <a id="get-cluster-status"></a>Get the cluster node status
-Get cluster details:
+Get the cluster node status by using the [az managed-cassandra cluster node-status](/cli/azure/ext/cosmosdb-preview/managed-cassandra/cluster?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_cluster_node_status) command:
```azurecli-interactive clusterName='cassandra-hybrid-cluster'
az managed-cassandra cluster node-status \
### <a id="list-clusters-resource-group"></a>List the clusters by resource group
-List clusters by resource group:
+List clusters by resource group by using the [az managed-cassandra cluster list](/cli/azure/ext/cosmosdb-preview/managed-cassandra/cluster?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_cluster_list) command:
```azurecli-interactive subscriptionId='MySubscriptionId'
az managed-cassandra cluster list\
### <a id="list-clusters-subscription"></a>List clusters by subscription ID
-List clusters by subscription ID:
+List clusters by subscription ID by using the [az managed-cassandra cluster list](/cli/azure/ext/cosmosdb-preview/managed-cassandra?view=azure-cli-latest&preserve-view=true) command:
```azurecli-interactive # set your subscription id
The following sections demonstrate how to manage Azure Managed Instance for Apac
### <a id="create-datacenter"></a>Create a datacenter
-Create a datacenter:
+Create a datacenter by using the [az managed-cassandra datacenter create](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_create) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra datacenter create \
### <a id="delete-datacenter"></a>Delete a datacenter
-Delete a datacenter:
+Delete a datacenter by using the [az managed-cassandra datacenter delete](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_delete) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra datacenter delete \
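    --resource-group $resourceGroupName \
    --cluster-name 'cassandra-hybrid-cluster' \
    --data-center-name 'dc1'
# Hedged sketch: confirm parameter names with `az managed-cassandra datacenter delete --help`.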
### <a id="get-datacenter-details"></a>Get datacenter details
-Get datacenter details:
+Get datacenter details by using the [az managed-cassandra datacenter show](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_show) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra datacenter show \
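    --resource-group $resourceGroupName \
    --cluster-name 'cassandra-hybrid-cluster' \
    --data-center-name 'dc1'
# Hedged sketch: confirm parameter names with `az managed-cassandra datacenter show --help`.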
### <a id="update-datacenter"></a>Update or scale a datacenter
-Update or scale a datacenter (to scale change nodeCount value):
+Update or scale a datacenter (to scale, change the nodeCount value) by using the [az managed-cassandra datacenter update](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_update) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra datacenter update \
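    --resource-group $resourceGroupName \
    --cluster-name 'cassandra-hybrid-cluster' \
    --data-center-name 'dc1' \
    --node-count 13
# Hedged sketch: to scale out or in, change --node-count; confirm parameter names with
# `az managed-cassandra datacenter update --help`.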
### <a id="get-datacenters-cluster"></a>Get the datacenters in a cluster
-Get datacenters in a cluster:
+Get datacenters in a cluster by using the [az managed-cassandra datacenter list](/cli/azure/ext/cosmosdb-preview/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#ext_cosmosdb_preview_az_managed_cassandra_datacenter_list) command:
```azurecli-interactive resourceGroupName='MyResourceGroup'
az managed-cassandra datacenter list \
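    --resource-group $resourceGroupName \
    --cluster-name 'cassandra-hybrid-cluster'
# Hedged sketch: confirm parameter names with `az managed-cassandra datacenter list --help`.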
## Next steps * [Create a managed instance cluster from the Azure portal](create-cluster-portal.md)
-* [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
+* [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
mariadb Howto Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-migrate-dump-restore.md
description: This article explains two common ways to back up and restore databa
+ Last updated 2/27/2020
marketplace Test Drive Azure Subscription Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-drive-azure-subscription-setup.md
This article explains how to set up an Azure Marketplace subscription and **Dyna
5. Under Supported account types, select **Account in any organization directory and personal Microsoft accounts**. 6. Select **Create** and wait for your app to be created. 7. Once the app is created, note the **Application ID** displayed on the overview screen. You will need this value later when configuring your test drive.
- 8. To add a nativeclient redirect URI, select the **Authentication** blade. Under **Platform configuration**, select **Add Platform** > **Mobile** > **Desktop** application tile. Choose the **nativeclient** redirect URI and select **Configure**.
-
- :::image type="content" source="./media/test-drive/configure-desktop-devices.png" alt-text="Adding a nativeclient redirect URI.":::
-
- 9. Under **Manage Application**, select **API permissions**.
- 10. Select **Add a permission** and then **Microsoft Graph API**.
- 11. Select the **Application** permission category and then the **Directory.Read.All** and **Directory.ReadWrite.All** permissions.
+ 8. Under **Manage Application**, select **API permissions**.
+ 9. Select **Add a permission** and then **Microsoft Graph API**.
+ 10. Select the **Application** permission category and then the **User.ReadWrite.All**, **Directory.Read.All** and **Directory.ReadWrite.All** permissions.
:::image type="content" source="./media/test-drive/microsoft-graph.png" alt-text="Setting the application permissions.":::
- 12. To add **Dynamics CRM - User impersonation** access for allow list Azure AD app, select **Add permission** again.
-
- :::image type="content" source="./media/test-drive/request-api-permissions.png" alt-text="Requesting the application permissions.":::
-
- 13. Once the permission is added, select **Grant admin consent for Microsoft**.
- 14. From the message alert, select **Yes**.
+ 11. Once the permission is added, select **Grant admin consent for Microsoft**.
+ 12. From the message alert, select **Yes**.
[![Shows the application permissions successfully granted.](media/test-drive/api-permissions-confirmation-customer.png)](media/test-drive/api-permissions-confirmation-customer.png#lightbox)
- 15. To generate a secret for the Azure AD App:
+ 13. To generate a secret for the Azure AD App:
1. From **Manage Application**, select **Certificate and secrets**. 2. Under Client secrets, select **New client secret**. 3. Enter a description, such as *Test Drive*, and select an appropriate duration. The test drive will break once this Key expires, at which point you will need to generate and provide AppSource a new key.
This article explains how to set up an Azure Marketplace subscription and **Dyna
:::image type="content" source="./media/test-drive/add-client-secret.png" alt-text="Adding a client secret.":::
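If you prefer scripting over the portal, a rough Azure CLI equivalent of the app registration and client secret steps above might look like the following. This is a hedged sketch only: the display name is a placeholder, and the exact parameters available (for example, `--years` on `az ad app credential reset`) depend on your CLI version.

```azurecli-interactive
# Hedged sketch: create the Azure AD app registration and generate a client secret.
appId=$(az ad app create --display-name "Test Drive App" --query appId --output tsv)
az ad app credential reset --id $appId --years 1
```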
-5. Sometimes it takes longer than expected to sync a user from Azure AD to a CRM instance. To aid with this, we added a process to force sync user, but it requires the Azure AD application to be allowlisted by Partner Center. To do this, see [User sync to Customer Engagement instance](https://github.com/microsoft/AppSource/blob/master/Microsoft%20Hosted%20Test%20Drive/CDS_Utility_to_ForceUserSync_in_CRM_Instance.md).
-6. Add the Service Principal role to the application to allow the Azure AD app to remove users from your Azure tenant.
+5. Add the Service Principal role to the application to allow the Azure AD app to remove users from your Azure tenant.
1. Open an Administrative-level PowerShell command prompt. 2. Install-Module MSOnline (run this command if MSOnline is not installed). 3. Connect-MsolService (this will display a popup window; sign in with the newly created org tenant).
This article explains how to set up an Azure Marketplace subscription and **Dyna
:::image type="content" source="./media/test-drive/sign-in-to-account.png" alt-text="Signing in to your account.":::
-7. Add the above created Azure app as an application user to your test drive CRM instance.
+6. Add the above created Azure app as an application user to your test drive CRM instance.
1. Add a new user in **Azure Active Directory**. Only **Name** and **Username** values (belonging to the same tenant) are required to create this user, leave the other fields as default. Copy the username value. 2. Sign into **CRM instance** and select **Setting** > **Security** > **Users**. 3. Change the view to **Application Users**.
This article explains how to set up an Azure Marketplace subscription and **Dyna
:::image type="content" source="./media/test-drive/security-roles-selection.png" alt-text="Selecting the role privileges.":::
- 10. Assign the application user the custom security role you created for your test drive.
+ 10. Also, enable the **Act on Behalf of Another User** privilege.
+ 11. Assign the application user the custom security role you created for your test drive.
## Set up for Dynamics 365 for Operations
media-services Analyzing Video Audio Files Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/analyzing-video-audio-files-concept.md
editor: ''
Previously updated : 10/21/2020 Last updated : 03/15/2021
Media Services currently supports the following built-in analyzer presets:
|**Preset name**|**Scenario**|**Details**| ||||
-|[AudioAnalyzerPreset](/rest/api/media/transforms/createorupdate#audioanalyzerpreset)|Analyzing audio Standard|The preset applies a predefined set of AI-based analysis operations, including speech transcription. Currently, the preset supports processing content with a single audio track that contains speech in a single language. You can specify the language for the audio payload in the input using the BCP-47 format of 'language tag-region'. Supported languages are English ('en-US' and 'en-GB'), Spanish ('es-ES' and 'es-MX'), French ('fr-FR'), Italian ('it-IT'), Japanese ('ja-JP'), Portuguese ('pt-BR'), Chinese ('zh-CN'), German ('de-DE'), Arabic ('ar-EG' and 'ar-SY'), Russian ('ru-RU'), Hindi ('hi-IN'), and Korean ('ko-KR').<br/><br/> If the language isn't specified or set to null, automatic language detection chooses the first language detected and continues with the selected language for the duration of the file. The automatic language detection feature currently supports English, Chinese, French, German, Italian, Japanese, Spanish, Russian, and Portuguese. It doesn't support dynamically switching between languages after the first language is detected. The automatic language detection feature works best with audio recordings with clearly discernible speech. If automatic language detection fails to find the language, the transcription falls back to English.|
+|[AudioAnalyzerPreset](/rest/api/media/transforms/createorupdate#audioanalyzerpreset)|Analyzing audio Standard|The preset applies a predefined set of AI-based analysis operations, including speech transcription. Currently, the preset supports processing content with a single audio track that contains speech in a single language. You can specify the language for the audio payload in the input using the BCP-47 format of 'language tag-region'. Supported languages are English ('en-US', 'en-GB' and 'en-AU'), Spanish ('es-ES' and 'es-MX'), French ('fr-FR' and 'fr-CA'), Italian ('it-IT'), Japanese ('ja-JP'), Portuguese ('pt-BR'), Chinese ('zh-CN'), German ('de-DE'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-SA' and 'ar-SY'), Russian ('ru-RU'), Hindi ('hi-IN'), Korean ('ko-KR'), Danish('da-DK'), Norwegian('nb-NO'), Swedish('sv-SE'), Finnish ('fi-FI'), Thai('th-TH') and Turkish('tr-TR').<br/><br/> If the language isn't specified or set to null, automatic language detection chooses the first language detected and continues with the selected language for the duration of the file. The automatic language detection feature currently supports English, Chinese, French, German, Italian, Japanese, Spanish, Russian, and Portuguese. It doesn't support dynamically switching between languages after the first language is detected. The automatic language detection feature works best with audio recordings with clearly discernible speech. If automatic language detection fails to find the language, the transcription falls back to English.|
|[AudioAnalyzerPreset](/rest/api/media/transforms/createorupdate#audioanalyzerpreset)|Analyzing audio Basic|This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file including only the keywords, transcription,and timing information. Automatic language detection and speaker diarization are not included in this mode. The list of supported languages is available [here](#built-in-presets)| |[VideoAnalyzerPreset](/rest/api/medi).| |[FaceDetectorPreset](/rest/api/media/transforms/createorupdate#facedetectorpreset)|Detecting faces present in video|Describes the settings to be used when analyzing a video to detect all the faces present.|
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
To stay up-to-date with the most recent developments, this article provides you
* Bug fixes * Deprecated functionality
-## Known issues
+## March 2021
+
+### New language support added to the AudioAnalyzer preset
+
+Additional languages for video transcription and subtitling are available now in the AudioAnalyzer preset (both Basic and Standard modes).
+
+* English (Australia), 'en-AU'
+* French (Canada), 'fr-CA'
+* Arabic (Bahrain) modern standard, 'ar-BH'
+* Arabic (Egypt), 'ar-EG'
+* Arabic (Iraq), 'ar-IQ'
+* Arabic (Israel), 'ar-IL'
+* Arabic (Jordan), 'ar-JO'
+* Arabic (Kuwait), 'ar-KW'
+* Arabic (Lebanon), 'ar-LB'
+* Arabic (Oman), 'ar-OM'
+* Arabic (Qatar), 'ar-QA'
+* Arabic (Saudi Arabia), 'ar-SA'
+* Danish, 'da-DK'
+* Norwegian, 'nb-NO'
+* Swedish, 'sv-SE'
+* Finnish, 'fi-FI'
+* Thai, 'th-TH'
+* Turkish, 'tr-TR'
+
+See the latest available languages in the [Analyzing video and audio files concept article](analyzing-video-audio-files-concept.md).
## February 2021
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-migrate-import-export.md
description: This article explains common ways to import and export databases in
+ Last updated 10/30/2020
Last updated 10/30/2020
[!INCLUDE[applies-to-single-flexible-server](includes/applies-to-single-flexible-server.md)] This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench.
+For a detailed and comprehensive migration guide, see [MySQL to Azure Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+ You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide will lead you through the successful planning and execution of a MySQL migration to Azure. ## Before you begin
Add the connection information to MySQL Workbench.
> [!TIP] > For scenarios where you want to dump and restore the entire database, you should use [dump and restore](concepts-migrate-dump-restore.md) approach instead.
-Use MySQL tools to import and export databases into Azure MySQL Database in the following scenarios.
+Use MySQL tools to import and export databases into Azure MySQL Database in the following scenarios. For other tools, see page 22 of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/blob/master/MigrationGuide/MySQL%20Migration%20Guide_v1.1.pdf).
- When you need to selectively choose a few tables to import from an existing MySQL database into Azure MySQL Database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables) and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables). - When you're moving the database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate.
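For example, a selective export of two tables with mysqldump, followed by an import into Azure Database for MySQL, might look like the following sketch. Server, user, database, and table names are placeholders:

```bash
# Hedged example: dump only two tables from the source server using the --tables switch,
# then load them into the target Azure Database for MySQL server.
mysqldump -h onprem-server -u sourceadmin -p sourcedb --tables table1 table2 > selected_tables.sql
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p targetdb < selected_tables.sql
```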
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-solutions/datadog/troubleshoot.md
This document contains information about troubleshooting your solutions that use Datadog.
+## Purchase errors
+
+* Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.
+
+ Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md).
+
+* The EA subscription doesn't allow Marketplace purchases.
+
+ Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
+ ## Unable to create Datadog resource To set up the Azure Datadog integration, you must have **Owner** access on the Azure subscription. Ensure you have the appropriate access before starting the setup.
postgresql Howto Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-migrate-using-dump-and-restore.md
description: Describes how to extract a PostgreSQL database into a dump file and
+ Last updated 09/22/2020
remote-rendering Manipulate Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/unity/manipulate-models/manipulate-models.md
The bounds of a model are defined by the box that contains the entire model - ju
// Create a query using the model entity async private void QueryBounds() {
- remoteBoundsQuery = targetModel.ModelEntity.QueryLocalBoundsAsync();
+ var remoteBounds = targetModel.ModelEntity.QueryLocalBoundsAsync();
CurrentBoundsState = RemoteBoundsState.Updating; await remoteBounds;
remote-rendering Materials Lighting Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/unity/materials-lighting-effects/materials-lighting-effects.md
We'll create a script that automatically creates a remote entity, adds a cut pla
{ public Color SliceColor = new Color(0.5f, 0f, 0f, .5f); public float FadeLength = 0.01f;
- public Axis SliceNormal = Axis.Y_Neg;
+ public Axis SliceNormal = Axis.NegativeY;
public bool AutomaticallyCreate = true;
remote-rendering View Remote Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md
public async void JoinRemoteSession()
else { CurrentCoordinatorState = RemoteRenderingState.ConnectingToNewRemoteSession;
- joinResult = await ARRSessionService.StartSession(new RenderingSessionCreationOptions(renderingSessionVmSize, maxLeaseHours, maxLeaseMinutes));
+ joinResult = await ARRSessionService.StartSession(new RenderingSessionCreationOptions(renderingSessionVmSize, (int)maxLeaseHours, (int)maxLeaseMinutes));
} if (joinResult.Status == RenderingSessionStatus.Ready || joinResult.Status == RenderingSessionStatus.Starting)
The **LoadModel** method is designed to accept a model path, progress handler, a
#endif //Load a model that will be parented to the entity
- var loadModelParams = new LoadModelFromSasParams(modelPath, modelEntity);
+ var loadModelParams = new LoadModelFromSasOptions(modelPath, modelEntity);
var loadModelAsync = ARRSessionService.CurrentActiveSession.Connection.LoadModelFromSasAsync(loadModelParams, progress); var result = await loadModelAsync; return modelEntity;
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/overview.md
Azure networking supports various secure remote access scenarios. Some of these
- [Connect Azure Virtual Networks to each other](../../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md)
+### Azure Private Link
+
+[Azure Private Link](https://azure.microsoft.com/services/private-link/) enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services privately in your virtual network over a [private endpoint](https://docs.microsoft.com/azure/private-link/private-endpoint-overview). Setup and consumption using Azure Private Link is consistent across Azure PaaS, customer-owned, and shared partner services. Traffic from your virtual network to the Azure service always remains on the Microsoft Azure backbone network.
+
+[Private Endpoints](https://docs.microsoft.com/azure/private-link/private-endpoint-overview) allow you to secure your critical Azure service resources to only your virtual networks. Azure Private Endpoint uses a private IP address from your VNet to connect you privately and securely to a service powered by Azure Private Link, effectively bringing the service into your VNet. Exposing your virtual network to the public internet is no longer necessary to consume services on Azure.
+
+You can also create your own private link service in your virtual network. [Azure Private Link service](https://docs.microsoft.com/azure/private-link/private-link-service-overview) is the reference to your own service that is powered by Azure Private Link. Your service that is running behind Azure Standard Load Balancer can be enabled for Private Link access so that consumers to your service can access it privately from their own virtual networks. Your customers can create a private endpoint inside their virtual network and map it to this service. Exposing your service to the public internet is no longer necessary to render services on Azure.
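To illustrate the private endpoint concept described above, the following Azure CLI sketch creates a private endpoint for a storage account's blob service. The resource names are placeholders, and the exact parameter names (for example, `--group-id` versus `--group-ids`) vary by CLI version, so treat this as a hedged example rather than a definitive command.

```azurecli-interactive
# Hedged example: map a storage account's blob service into an existing subnet
# through a private endpoint.
az network private-endpoint create \
    --name myStoragePrivateEndpoint \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --subnet mySubnet \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
    --group-id blob \
    --connection-name myStorageConnection
```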
+ ### VPN Gateway To send network traffic between your Azure Virtual Network and your on-premises site, you must create a VPN gateway for your Azure Virtual Network. A [VPN gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) is a type of virtual network gateway that sends encrypted traffic across a public connection. You can also use VPN gateways to send traffic between Azure Virtual Networks over the Azure network fabric.
Microsoft uses multiple security practices and technologies across its products
- Understand your [shared responsibility in the cloud](shared-responsibility.md). -- Learn how [Azure Security Center](../../security-center/security-center-introduction.md) can help you prevent, detect, and respond to threats with increased visibility and control over the security of your Azure resources.
+- Learn how [Azure Security Center](../../security-center/security-center-introduction.md) can help you prevent, detect, and respond to threats with increased visibility and control over the security of your Azure resources.
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-stateless-node-types.md
To set one or more node types as stateless in a cluster resource, set the **isSt
To enable stateless node types, you should configure the underlying virtual machine scale set resource in the following way: * The **singlePlacementGroup** property, which should be set to **false** if you need to scale to more than 100 VMs.
-* The Scale set's **upgradePolicy** which **mode** should be set to **Rolling**.
+* The Scale set's **upgradePolicy** **mode** should be set to **Rolling**.
* Rolling Upgrade Mode requires Application Health Extension or Health probes configured. Configure health probe with default configuration for Stateless Node types as suggested below. Once applications are deployed to the node type, Health Probe/Health extension ports can be changed to monitor application health.
+>[!NOTE]
+> The platform fault domain count must be updated to 5 when a stateless node type is backed by a virtual machine scale set that spans multiple zones. See this [template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-2-NodeTypes-Windows-Stateless-CrossAZ-Secure) for more details.
+>
+> **platformFaultDomainCount:5**
```json { "apiVersion": "2018-10-01",
service-fabric Service Fabric Windows Cluster Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-windows-cluster-windows-security.md
To prevent unauthorized access to a Service Fabric cluster, you must secure the
> ## Configure Windows security using gMSA
-The sample *ClusterConfig.gMSA.Windows.MultiMachine.JSON* configuration file downloaded with the [Microsoft.Azure.ServiceFabric.WindowsServer.\<version>.zip](https://go.microsoft.com/fwlink/?LinkId=730690) standalone cluster package contains a template for configuring Windows security using [Group Managed Service Account (gMSA)](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)):
+gMSA is the preferred security model. The sample *ClusterConfig.gMSA.Windows.MultiMachine.JSON* configuration file downloaded with the [Microsoft.Azure.ServiceFabric.WindowsServer.\<version>.zip](https://go.microsoft.com/fwlink/?LinkId=730690) standalone cluster package contains a template for configuring Windows security using [Group Managed Service Account (gMSA)](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)):
``` "security": {
The following example **security** section configures Windows security using gMS
``` ## Configure Windows security using a machine group
-This model is being deprecated. The recommendation is to use gMSA as detailed above. The sample *ClusterConfig.Windows.MultiMachine.JSON* configuration file downloaded with the [Microsoft.Azure.ServiceFabric.WindowsServer.\<version>.zip](https://go.microsoft.com/fwlink/?LinkId=730690) standalone cluster package contains a template for configuring Windows security. Windows security is configured in the **Properties** section:
+As detailed above, gMSA is preferred, but this security model is also supported. The sample *ClusterConfig.Windows.MultiMachine.JSON* configuration file downloaded with the [Microsoft.Azure.ServiceFabric.WindowsServer.\<version>.zip](https://go.microsoft.com/fwlink/?LinkId=730690) standalone cluster package contains a template for configuring Windows security. Windows security is configured in the **Properties** section:
``` "security": {
storage Data Lake Storage Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-cli.md
ACL inheritance is already available for new child items that are created under
- A storage account that has hierarchical namespace enabled. Follow [these](create-data-lake-storage-account.md) instructions to create one. -- Azure CLI version `2.6.0` or higher.
+- Azure CLI version `2.14.0` or higher.
- One of the following security permissions:
storage Data Lake Storage Directory File Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-directory-file-acl-powershell.md
This example moves a directory named `my-directory` to a subdirectory of `my-dir
$filesystemName = "my-file-system" $dirname = "my-directory/" $dirname2 = "my-directory-2/my-subdirectory/"
-Move-AzDataLakeGen2Item -Context $ctx -FileSystem $filesystemName -Path $dirname1 -DestFileSystem $filesystemName -DestPath $dirname2
+Move-AzDataLakeGen2Item -Context $ctx -FileSystem $filesystemName -Path $dirname -DestFileSystem $filesystemName -DestPath $dirname2
``` ## Delete a directory
storage Data Lake Storage Use Databricks Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-use-databricks-spark.md
If you don't have an Azure subscription, create a [free account](https://azure
This tutorial uses flight data from the Bureau of Transportation Statistics to demonstrate how to perform an ETL operation. You must download this data to complete the tutorial.
-1. Go to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time).
+1. Go to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?gnoyr_VQ=FGJ).
2. Select the **Prezipped File** check box to select all data fields.
When they're no longer needed, delete the resource group and all related resourc
## Next steps > [!div class="nextstepaction"]
-> [Extract, transform, and load data using Apache Hive on Azure HDInsight](data-lake-storage-tutorial-extract-transform-load-hive.md)
+> [Extract, transform, and load data using Apache Hive on Azure HDInsight](data-lake-storage-tutorial-extract-transform-load-hive.md)
storage Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/network-routing-preference.md
Title: Network routing preference
description: Network routing preference enables you to specify how network traffic is routed to your account from clients over the internet. -+ Last updated 02/11/2021--++
storage Storage Auth Aad App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-auth-aad-app.md
Update the *appsettings.json* file with your own values, as follows:
"Domain": "<azure-ad-domain-name>.onmicrosoft.com", "TenantId": "<tenant-id>", "ClientId": "<client-id>",
- "ClientSecret": "<client-secret>"
+ "ClientSecret": "<client-secret>",
"ClientCertificates": [ ], "CallbackPath": "/signin-oidc"
https://<storage-account>.blob.core.windows.net/<container>/Blob1.txt
- [Microsoft identity platform](../../active-directory/develop/index.yml) - [Manage access rights to storage data with Azure RBAC](./storage-auth-aad-rbac-portal.md)-- [Authenticate access to blobs and queues with Azure Active Directory and managed identities for Azure Resources](storage-auth-aad-msi.md)
+- [Authenticate access to blobs and queues with Azure Active Directory and managed identities for Azure Resources](storage-auth-aad-msi.md)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks | Microsoft Docs description: Configure layered network security for your storage account using Azure Storage firewalls and Azure Virtual Network. -+ Last updated 03/05/2021
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-private-endpoints.md
Title: Use private endpoints
description: Overview of private endpoints for secure access to storage accounts from virtual networks. -+ Last updated 03/12/2020-+
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-migration-storsimple-8000.md
Your registered on-premises Windows Server instance must be ready and connected
:::row::: :::column:::
- [![Step-by-step guide and demo for how to securely expose Azure file shares directly to information workers and apps - click to play!](./media/storage-files-migration-storsimple-8000/azure-files-direct-access-video-placeholder.png)](https://youtu.be/a-Twfus0HWE)
+ <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-networking-overview.md
Networking configuration for Azure file shares is done on the Azure storage acco
We recommend reading [Planning for an Azure Files deployment](storage-files-planning.md) prior to reading this conceptual guide.
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps. The sections below provide links and additional context to the documentation referenced in the video.
+ :::column-end:::
+ ## Accessing your Azure file shares When you deploy an Azure file share within a storage account, your file share is immediately accessible via the storage account's public endpoint. This means that authenticated requests, such as requests authorized by a user's logon identity, can originate securely from inside or outside of Azure.
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/analytics/partner-overview.md
Title: Storage Big Data and Analytics partners -
-description: List of Microsoft partners building customer solutions for Big Data and Analytics with Azure Storage
-keywords: Storage, Blob, Analytics, Big Data
+ Title: Big data and analytics partners
+
+description: List of Microsoft partner companies that build customer solutions for big data and analytics with Azure Storage
+ Previously updated : 12/11/2020- Last updated : 03/15/2021+ +
-# Azure Storage Analytics and Big Data partners
+# Azure Storage analytics and big data partners
+
+This article highlights Microsoft partner companies that are integrated with Azure Data Lake Storage. These partner solutions cover workloads like modern data warehouse workloads, advanced analytics, and real-time analytics. These partners take advantage of the [hierarchical namespace](../../../blobs/data-lake-storage-namespace.md) in Azure Storage to optimize their solution and run it efficiently in Azure.
-This article highlights Microsoft partner companies that are integrated with Azure Data Lake Storage. They cover workloads including Modern Data Warehouse, Advanced Analytics, Real-time analytics, etc. These partners are taking advantage of the hierarchical name space in Azure Storage to optimize their solution and run it efficiently in Azure.
+## Verified partners
-## Verified Analytics and Big Data partners
-| Partner | Description | Website/Product link |
+| Partner | Description | Website/product link |
| - | -- | -- |
-|![Dremio company logo](./media/dremio-logo.png) |**Dremio**<br>Analysts and data scientists can discover, explore, and curate data using DremioΓÇÖs intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real time. And analysts can create new reports that combine data between ADLS and other databases.|[Partner page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br>|
-![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>InformaticaΓÇÖs enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality and governance of enterprise data on Azure. Our AI-powered, metadata-driven data integration, data quality and governance capabilities enables you to modernize analytics and accelerate your move to a data warehouse or data lake on Microsoft Azure|[Partner Page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
-![Wandisco company logo](./media/wandisco-logo.jpg) |**WANdisco**<br>Microsoft and WANdisco have teamed up to enable companies to put their Hadoop data to work in the cloud, taking advantage of next-gen machine learning-powered cloud analytics platforms for more robust business insights. WANdiscoΓÇÖs migration engine allows for data to be migrated while it remains in active useΓÇô at any scale, with zero down time and zero data loss.<br>Developed in partnership, WANdisco LiveData Platform for Azure automates deployment as an Azure Native Service, leveraging Role-based Access Control, Active Directory, Azure Policy enforcement, and Activity Log integration. With Azure Billing integration, there is no need to add a vendor contract or require additional vendor approvals.<br>Accelerate your replication of Hadoop data between multiple sources and targets, for any data architecture. Using LiveData Cloud Services means your data will be available for Azure Databricks, Synapse and HDInsight as soon as it lands, with guaranteed 100% data consistency. |[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wandisco.ldm?tab=Overview)|
+|![Dremio company logo](./media/dremio-logo.png) |**Dremio**<br>Analysts and data scientists can discover, explore, and curate data, while your information technology (IT) department maintains governance and security. Dremio makes it easy to join Data Lake Storage with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on Data Lake Storage, immediately access that data in Power BI, create visualizations, and iteratively refine reports in real time. Analysts can also create new reports that combine data between Data Lake Storage and other databases.|[Partner page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br>|
+![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>InformaticaΓÇÖs enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, and data quality and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner Page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
+![Wandisco company logo](./medi) is tightly integrated with Azure. Besides having an Azure portal deployment experience, it also leverages role-based access control, Azure Active Directory, Azure Policy enforcement, and Activity log integration. With Azure Billing integration, you don't need to add a vendor contract or get additional vendor approvals.<br><br>Accelerate the replication of Hadoop data between multiple sources and targets for any data architecture. With LiveData Cloud Services, your data will be available for Azure Databricks, Synapse Analytics, and HDInsight as soon as it lands, with guaranteed 100% data consistency. |[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wandisco.ldm?tab=Overview)|
## Next steps
-To learn more about some of our other partners, see [Archive, Backup and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md), [Container Solution partners](..\container-solutions\partner-overview.md), [Data Management and Migration partners](..\data-management\partner-overview.md), and also [Primary and Secondary Storage partners](..\primary-secondary-storage\partner-overview.md).
+To learn more about some of our other partners, see:
+- [Archive, backup, and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md)
+- [Container solution partners](..\container-solutions\partner-overview.md)
+- [Data management and migration partners](..\data-management\partner-overview.md)
+- [Primary and secondary storage partners](..\primary-secondary-storage\partner-overview.md)
storage Commvault Solution Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/commvault/commvault-solution-guide.md
Title: Back up your data to Azure with Commvault-
-description: Web page provides an overview of factors to consider and steps to follow to leverage Azure as a storage target and recovery location for Commvault Complete Backup and Recovery
-keywords: Commvault, Backup to Cloud, Backup, Backup to Azure, Disaster Recovery, Business Continuity
+
+description: Provides an overview of factors to consider and steps to follow to use Azure as a storage target and recovery location for Commvault Complete Backup and Recovery
+ Previously updated : 11/11/2020- Last updated : 03/15/2021+ -+
-# Back up to Azure with Commvault
+# Backup to Azure with Commvault
+
+This article helps you integrate a Commvault infrastructure with Azure Blob storage. It includes prerequisites, considerations, implementation, and operational guidance. This article addresses using Azure as an offsite backup target and a recovery site if a disaster prevents normal operation within your primary site.
+
+> [!NOTE]
+> Commvault offers a lower recovery time objective (RTO) solution, Commvault Live Sync. This solution lets you have a standby VM that can help you recover more quickly in the event of a disaster in an Azure production environment. These capabilities are outside the scope of this document.
-This article provides a guide to integrating a Commvault infrastructure with Azure Blob Storage. It includes pre-requisites, Azure Storage principles, implementation, and operational guidance. This article only addresses using Azure as an offsite Backup target and a recovery site in the event of a disaster, which prevents normal operation within your primary site. Commvault also offers a lower RTO solution, Commvault Live Sync, as a means to have a standby VM ready to boot and recover more quickly in the event of a disaster and protection of resources within an Azure Production environment. These capabilities are out of scope for this document.
+## Reference architecture
-## Reference architecture for on-premises to Azure and In-Azure deployments
+The following diagram provides a reference architecture for on-premises to Azure and in-Azure deployments.
![Commvault to Azure Reference Architecture](../media/commvault-diagram.png)
-Your existing Commvault deployment can easily integrate with Azure by adding an Azure Storage Account, or multiple accounts, as a Cloud Storage target. Commvault also allows you to recover backups from on-premises within Azure giving you a recovery-on-demand site in Azure.
+Your existing Commvault deployment can easily integrate with Azure by adding an Azure storage account, or multiple accounts, as a cloud storage target. Commvault also allows you to recover backups from on-premises within Azure giving you a recovery-on-demand site in Azure.
## Commvault interoperability matrix
-| Workload | GPv2 and Blob Storage | Cool Tier support | Archive Tier support | Data Box Family Support |
+
+| Workload | GPv2 and Blob storage | Cool tier support | Archive tier support | Data Box Family support |
|--|--|--|-|-| | On-premises VMs/data | v11.5 | v11.5 | v11.10 | v11.10 |
-| Azure VMs | v11.5 | v11.5 | v11.5 | NA |
-| Azure Blob | v11.6 | v11.6 | v11.6 | NA |
-| Azure Files | v11.6 | v11.6 | v11.6 | NA |
+| Azure VMs | v11.5 | v11.5 | v11.5 | N/A |
+| Azure Blob | v11.6 | v11.6 | v11.6 | N/A |
+| Azure Files | v11.6 | v11.6 | v11.6 | N/A |
## Before you begin
-A little upfront planning will make sure you join the ranks of the many, many happy customers using Azure as an offsite backup target and recovery site.
+A little upfront planning will help you use Azure as an offsite backup target and recovery site.
-### Are you new to Azure?
+### Get started with Azure
-Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](https://docs.microsoft.com/azure/architecture/cloud-adoption/) \(CAF\) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade Cloud Adoption. The CAF includes a step-by-step [Azure Setup Guide](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/) for those new to Azure to help you get up and running quickly and securely and you can find an interactive version in the [Azure Portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You will find sample architectures and specific best practices for deploying applications and free training resources to put you on the path to Azure expertise.
+Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](https://docs.microsoft.com/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade cloud adoption. The CAF includes a step-by-step [Azure setup guide](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise.
### Consider the network between your location and Azure
-Whether leveraging Cloud resources to run Production, Test and Development, or as a Backup target and Recovery site it is important to understand your bandwidth needs for initial backup seeding and for on-going day-to-day transfers.
+Whether using cloud resources to run production, test and development, or as a backup target and recovery site, it's important to understand your bandwidth needs for initial backup seeding and for ongoing day-to-day transfers.
+
+Azure Data Box provides a way to transfer your initial backup baseline to Azure without requiring more bandwidth. This is useful if the baseline transfer is estimated to take longer than you can tolerate. You can use the Data Transfer estimator when you create a storage account to estimate the time required to transfer your initial backup.
-Azure Data Box provides a means to transfer your initial backup baseline to Azure without requiring additional bandwidth if the baseline transfer is estimated to take longer than you can tolerate. You can leverage the Data Transfer estimator when you create a storage account to estimate the time required to transfer your initial backup.
+![Shows the Azure Storage data transfer estimator in the portal.](../media/az-storage-transfer.png)
-![Azure Storage Data Transfer Estimator](../media/az-storage-transfer.png)
+Remember, you'll require enough network capacity to support daily data transfers within the required transfer window (backup window) without impacting production applications. This section outlines the tools and techniques that are available to assess your network needs.
-Remember, you will require enough network capacity to support daily data transfers within the required transfer window (Backup window) without impacting Production applications. This section will outline the tools and techniques available to assess your network needs.
+#### Determine how much bandwidth you'll need
-#### How can you determine how much bandwidth you will need?
+To determine how much bandwidth you'll need, use the following resources:
-- Reports from your backup software.
- Commvault provides standard reports to determine [change rate](https://documentation.commvault.com/commvault/v11_sp19/article?p=39699.htm) and [total backup set size](https://documentation.commvault.com/commvault/v11_sp19/article?p=39621.htm) for the initial baseline transfer to Azure.
+- Reports from your backup software.
+- Commvault provides standard reports to determine [change rate](https://documentation.commvault.com/commvault/v11_sp19/article?p=39699.htm) and [total backup set size](https://documentation.commvault.com/commvault/v11_sp19/article?p=39621.htm) for the initial baseline transfer to Azure.
- Backup software-independent assessment and reporting tools like: - [MiTrend](https://mitrend.com/) - [Aptare](https://www.veritas.com/insights/aptare-it-analytics) - [Datavoss](https://www.datavoss.com/)
-#### How will I know how much headroom I have with my current Internet connection?
+#### Determine unutilized internet bandwidth
+
+It's important to know how much typically unutilized bandwidth (or *headroom*) you have available on a day-to-day basis. This helps you assess whether you can meet your goals for:
-It is important to know how much headroom, or typically unutilized, bandwidth you have available on a day-to-day basis. This will allow you to properly assess if you can meet your goals for initial time to upload, when not using Azure Data Box for offline seeding, and for completing daily backups based on the change rate identified above and your backup window. Below are methods you can use to identify the bandwidth headroom your backups to Azure are free to consume.
+- initial time to upload when you're not using Azure Data Box for offline seeding
+- completing daily backups based on the change rate identified earlier and your backup window
-- Are you an existing Azure ExpressRoute customer? View your [circuit usage](https://docs.microsoft.com/azure/expressroute/expressroute-monitoring-metrics-alerts#circuits-metrics) in the Azure portal.-- You can Contact your ISP. They should have reports to share with you illustrating your existing daily and monthly utilization.-- There are several tools that can measure utilization by monitoring your network traffic at your router/switch level including:
+Use the following methods to identify the bandwidth headroom that your backups to Azure are free to consume.
+
+- If you're an existing Azure ExpressRoute customer, view your [circuit usage](../../../../../expressroute/expressroute-monitoring-metrics-alerts.md#circuits-metrics) in the Azure portal.
+- Contact your ISP. They should be able to share reports that show your existing daily and monthly utilization.
+- There are several tools that can measure utilization by monitoring your network traffic at the router/switch level. These include:
- [Solarwinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS) - [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring) - [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/https://docsupdatetracker.net/index.html) - [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring)
-### Choosing the right Storage options
+### Choose the right storage options
-When using Azure as a backup target, customers make use of [Azure Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-blobs-introduction)\. Azure Blob storage is Microsoft's object storage solution. Blob storage is optimized for storing massive amounts of unstructured data, which is data that does not adhere to any data model or definition. Additionally, Azure Storage is durable, highly available, secure, and scalable. MicrosoftΓÇÖs platform offers up flexibility to select the right storage for the right workload in order to provide the [level of resiliency](https://docs.microsoft.com/azure/storage/common/storage-redundancy?toc=/azure/storage/blobs/toc.json) to meet your internal SLAs. Blob Storage is a pay-per-use service. You are [charged monthly](https://docs.microsoft.com/azure/storage/blobs/storage-blob-storage-tiers?tabs=azure-portal#pricing-and-billing) for the amount of data stored, accessing that data, and - in the case of Cool and Archive Tiers - a minimum required retention period. The resiliency and tiering options applicable to backup data are summarized in the tables below.
+When you use Azure as a backup target, you'll make use of [Azure Blob storage](../../../../blobs/storage-blobs-introduction.md). Blob storage is Microsoft's object storage solution. Blob storage is optimized for storing massive amounts of unstructured data, which is data that does not adhere to any data model or definition. Additionally, Azure Storage is durable, highly available, secure, and scalable. You can select the right storage for your workload to provide the [level of resiliency](../../../../common/storage-redundancy.md) to meet your internal SLAs. Blob storage is a pay-per-use service. You're [charged monthly](../../../../blobs/storage-blob-storage-tiers.md#pricing-and-billing) for the amount of data stored, accessing that data, and in the case of cool and archive tiers, a minimum required retention period. The resiliency and tiering options applicable to backup data are summarized in the following tables.
-**Azure Blob Storage resiliency options:**
+**Blob storage resiliency options:**
-| |Locally Redundant |Zone Redundant |Geographically Redundant |Geo Zone Redundant |
+| |Locally-redundant |Zone-redundant |Geo-redundant |Geo-zone-redundant |
||||||
-|Effective # of Copies | 3 | 3 | 6 | 6 |
-|# of Availability Zones | 1 | 3 | 2 | 4 |
-|# of Regions | 1 | 1 | 2 | 2 |
-|Manual Failover to Secondary Region | NA | NA | Yes | Yes |
+|**Effective # of copies** | 3 | 3 | 6 | 6 |
+|**# of availability zones** | 1 | 3 | 2 | 4 |
+|**# of regions** | 1 | 1 | 2 | 2 |
+|**Manual failover to secondary region** | N/A | N/A | Yes | Yes |
-**Azure Blob Storage tiers:**
+**Blob storage tiers:**
-| | Hot Tier |Cool Tier | Archive Tier |
+| | Hot tier |Cool tier | Archive tier |
| -- | -- | -- | -- |
-| Availability | 99.9% | 99% | Offline |
-| Usage Charges | Higher storage costs, Lower access, and transaction costs | Lower storage costs, higher access, and transaction costs | Lowest storage costs, highest access, and transaction costs |
-| Minimum Data Retention Required | NA | 30 days | 180 days |
-| Latency (Time to First Byte) | Milliseconds | Milliseconds | Hours |
-
-#### Sample Backup to Azure cost model
+| **Availability** | 99.9% | 99% | Offline |
+| **Usage charges** | Higher storage costs, lower access and transaction costs | Lower storage costs, higher access and transaction costs | Lowest storage costs, highest access and transaction costs |
+| **Minimum data retention required**| N/A | 30 days | 180 days |
+| **Latency (time to first byte)** | Milliseconds | Milliseconds | Hours |
-The concept of pay-per-use can be daunting to customers who are new to the Public Cloud. While you pay for only the capacity used, you do also pay for transactions (read and or writes) and [egress for data](https://azure.microsoft.com/pricing/details/bandwidth/) read back to your on-premises environment when [Azure Express Route Direct Local or Express Route Unlimited Data plan](https://azure.microsoft.com/pricing/details/expressroute/) are in use where data egress from Azure is included. You can perform what if analysis based on list pricing or with [Azure Storage Reserved Capacity pricing](https://docs.microsoft.com/azure/cost-management-billing/reservations/save-compute-costs-reservations), which can deliver up to 38% savings, in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). Here is an example pricing exercise to model the monthly cost of backing up to Azure, this is an example only and ***your pricing may vary due to activities not captured here:***
+#### Sample backup to Azure cost model
+Pay-per-use can be daunting to customers who are new to the cloud. While you pay for only the capacity used, you also pay for transactions (reads and writes) and for [egress for data](https://azure.microsoft.com/pricing/details/bandwidth/) read back to your on-premises environment, unless an [Azure ExpressRoute Direct Local or ExpressRoute unlimited data plan](https://azure.microsoft.com/pricing/details/expressroute/) is in use, in which case data egress from Azure is included. You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to perform "what if" analysis. You can base the analysis on list pricing or on [Azure Storage Reserved Capacity pricing](../../../../../cost-management-billing/reservations/save-compute-costs-reservations.md), which can deliver up to 38% savings. Here's an example pricing exercise to model the monthly cost of backing up to Azure. This is only an example. *Your pricing may vary due to activities not captured here.*
-|Cost Factor |Monthly Cost |
+|Cost factor |Monthly cost |
|||
-|100 TB of Backup Data on Cool Storage |$1556.48 |
+|100 TB of backup data on cool storage |$1556.48 |
|2 TB of new data written per day x 30 Days |$39 in transactions |
-|Monthly Estimated Total |$1595.48 |
+|Monthly estimated total |$1595.48 |
|||
-|One Time Restore of 5 TB to on-premises over Public Internet | $491.26 |
+|One time restore of 5 TB to on-premises over public internet | $491.26 |
+> [!NOTE]
+> This estimate was generated in the Azure Pricing Calculator using East US Pay-as-you-go pricing and is based on the Commvault default 32 MB sub-chunk size, which generates 65,536 PUT requests (write transactions) per day. This example may not be applicable to your requirements.
-> [!Note]
-This estimate was generated in the Azure Pricing Calculator using East US Pay-as-you-go pricing and is based on the Commvault default of 32MB sub-chunk size which generates 65,536 PUT Requests, aka write transactions, per day. This example may not be applicable towards your requirements.
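To make the transaction arithmetic above concrete, here's a minimal sketch of the estimate. The price per 10,000 operations is a placeholder chosen to roughly reproduce the ~$39 figure in the table; actual rates vary by tier, region, and billing plan, so treat this as an illustration rather than a quote.

```python
# Illustrative sketch of the write-transaction math behind the estimate above.
# The price per 10,000 operations is a placeholder assumption, not a quoted Azure rate.

def monthly_put_requests(new_data_tib_per_day: float, sub_chunk_mib: float, days: int = 30) -> int:
    """Each sub-chunk written to Blob storage results in one PUT request."""
    puts_per_day = (new_data_tib_per_day * 1024 * 1024) / sub_chunk_mib
    return int(puts_per_day * days)

ASSUMED_PRICE_PER_10K_WRITES = 0.20  # placeholder; check the Azure Pricing Calculator

puts = monthly_put_requests(new_data_tib_per_day=2, sub_chunk_mib=32)  # 65,536 PUTs per day
cost = puts / 10_000 * ASSUMED_PRICE_PER_10K_WRITES

print(f"PUT requests per month: {puts:,}")         # 1,966,080
print(f"Estimated transaction cost: ${cost:.2f}")  # roughly $39 with the placeholder rate
```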
+## Implementation guidance
-## Implementation and operational guidance
+This section provides a brief guide for how to add Azure Storage to an on-premises Commvault deployment. For detailed guidance and planning considerations, see the [Commvault Public Cloud Architecture guide for Microsoft Azure](https://documentation.commvault.com/commvault/v11/others/pdf/public-cloud-architecture-guide-for-microsoft-azure11-19.pdf).
-This section provides a brief guide to adding Azure Storage to an on-premises Commvault deployment. If you are interested in detailed guidance and planning considerations, we recommend reviewing the [Commvault Azure Architecture Guide](https://www.commvault.com/resources/public-cloud-architecture-guide-for-microsoft-azure-v11-sp16).
+1. Open the Azure portal, and search for **storage accounts**. You can also click on the default **Storage accounts** icon.
-1. Open the Azure portal, and search for "Storage Accounts" or click on the default services icon.
-
- 1. ![Azure Portal](../media/azure-portal.png)
+ ![Shows adding a storage accounts in the Azure portal.](../media/azure-portal.png)
- 1. ![Storage Accounts in the Azure Portal](../media/locate-storage-account.png)
+ ![Shows where you've typed storage in the search box of the Azure portal.](../media/locate-storage-account.png)
-2. Choose to Add an account, and select or create a Resource Group, provide a unique name, choose the region, select "Standard" Performance, always leave account kind as "Storage V2," choose the replication level, which meets your SLAs, and the default tier your backup software will leverage. An Azure Storage account makes Hot, Cool, and Archive tiers available within a single account and Commvault policies allow you to leverage multiple tiers to effectively manage the lifecycle of your data. Proceed to the next step.
+2. Select **Create** to add an account. Select or create a resource group, provide a unique name, and choose the region. Select **Standard** performance, always leave the account kind as **Storage V2**, choose the replication level that meets your SLAs, and select the default tier that your backup software will apply. An Azure Storage account makes hot, cool, and archive tiers available within a single account, and Commvault policies allow you to use multiple tiers to effectively manage the lifecycle of your data.
- ![Creating a Storage Account](../media/account-create-1.png)
+ ![Shows storage account settings in the portal](../media/account-create-1.png)
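   If you'd rather script this step than use the portal, here's a minimal sketch using the Azure SDK for Python (`azure-identity` and `azure-mgmt-storage`). The subscription ID, resource group, account name, and region are placeholder values you'd replace with your own.

   ```python
   # Sketch: create a Standard, StorageV2 account with GRS replication and a cool default tier.
   # All names below are placeholders.
   from azure.identity import DefaultAzureCredential
   from azure.mgmt.storage import StorageManagementClient

   client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

   poller = client.storage_accounts.begin_create(
       "backup-rg",                 # resource group (placeholder)
       "commvaultbackupsa01",       # must be globally unique (placeholder)
       {
           "location": "eastus",
           "kind": "StorageV2",                 # always leave the account kind as StorageV2
           "sku": {"name": "Standard_GRS"},     # replication level that meets your SLAs
           "access_tier": "Cool",               # default tier your backup software will use
       },
   )
   account = poller.result()
   print(account.name, account.provisioning_state)
   ```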
-3. Stick with the default networking options for now and move on to "Data Protection." Here, you can choose to enable "Soft Delete" which allows you to recover an accidentally deleted Backup file within the defined retention period and offers protection against accidental or malicious deletion.
-
- ![Creating a Storage Account Part 2](../media/account-create-2.png)
+3. Keep the default networking options for now and move on to **Data protection**. Here, you can choose to enable soft delete, which allows you to recover an accidentally deleted backup file within the defined retention period and offers protection against accidental or malicious deletion.
-4. Next, we recommend the default settings from the "Advanced" screen for Backup to Azure use cases.
+ ![Shows the Data Protection settings in the portal.](../media/account-create-2.png)
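   The same setting can also be applied from code after the account exists. This is a minimal sketch, assuming `azure-mgmt-storage`, with placeholder names and an arbitrary 14-day retention window.

   ```python
   # Sketch: enable blob soft delete on an existing account (placeholder names, example retention).
   from azure.identity import DefaultAzureCredential
   from azure.mgmt.storage import StorageManagementClient

   client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

   client.blob_services.set_service_properties(
       "backup-rg",                 # resource group (placeholder)
       "commvaultbackupsa01",       # storage account (placeholder)
       {"delete_retention_policy": {"enabled": True, "days": 14}},
   )
   ```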
- ![Creating a Storage Account Part 3](../media/account-create-3.png)
+4. Next, we recommend the default settings from the **Advanced** screen for backup to Azure use cases.
-5. Add tags for organization if you leverage tagging and create your account. You now have petabytes of on-demand storage at your disposal!
+ ![Shows Advanced settings tab in the portal.](../media/account-create-3.png)
-6. Two quick steps are all that are now required before you can add the account to your Commvault environment. Navigate to the account you created in the Azure portal and select "Containers" under the "Blob Service" menu in the Portal blade. Add a new container and choose a meaningful name. Then, navigate to the "Access Keys" item under "Settings" and copy the "Storage account name" and one of the two access keys. You will need the Container name, Account Name, and Access Key in our next steps.
-
- ![Creating a Container](../media/container.png)
-
- ![Grab that Account Info](../media/access-key.png)
+5. Add tags for organization if you use tagging, and create your account.
-7. ***(Optional)*** You can add additional layers of security to your deployment.
-
- 1. Configure Role Based Access to limit who can make changes to your Storage Account. [Learn more here](https://docs.microsoft.com/azure/storage/common/authorization-resource-provider?toc=/azure/storage/blobs/toc.json)
- 1. Restrict access to the account to specific network segments with [Storage Firewall](https://docs.microsoft.com/azure/storage/common/storage-network-security?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal) to prevent access attempts from outside your corporate network.
+6. Two quick steps are all that are now required before you can add the account to your Commvault environment. Navigate to the account you created in the Azure portal and select **Containers** under the **Blob service** menu. Add a container and choose a meaningful name. Then, navigate to the **Access keys** item under **Settings** and copy the **Storage account name** and one of the two access keys. You'll need the container name, account name, and access key in the next steps.
- ![Storage Firewall](../media/storage-firewall.png)
+ ![Shows container creation in the portal.](../media/container.png)
- 1. Set a [Delete Lock](https://docs.microsoft.com/azure/azure-resource-manager/management/lock-resources) on the account to prevent accidental deletion of the Storage Account.
+ ![Shows access key settings in the portal.](../media/access-key.png)
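   Here's a minimal sketch of the same two steps with the `azure-storage-blob` package, assuming placeholder names and the access key copied from the portal.

   ```python
   # Sketch: create a backup container using the account name and an access key (placeholders).
   from azure.storage.blob import BlobServiceClient

   account_name = "commvaultbackupsa01"   # placeholder
   account_key = "<access-key-1>"         # one of the two keys from Access keys
   container_name = "commvault-backups"   # choose a meaningful name

   service = BlobServiceClient(
       account_url=f"https://{account_name}.blob.core.windows.net",
       credential=account_key,
   )
   service.create_container(container_name)
   print(f"Created container '{container_name}' in account '{account_name}'.")
   ```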
- ![Resource Lock](../media/resource-lock.png)
-
- 1. Configure additional [security best practices](https://docs.microsoft.com/azure/storage/blobs/security-recommendations).
-
-1. In the Commvault Command Center, navigate to "Manage" --> "Security" --> "Credential Manager." Choose a "Cloud Account," "Vendor Type" of Microsoft Azure Storage, select the "MediaAgent", which will transfer data to and from Azure, add the Storage Account Name and Access Key.
-
- ![Commvault Credential](../media/commvault-credential.png)
+7. (*Optional*) You can add additional layers of security to your deployment.
-9. Next, navigate to "Storage" --> "Cloud" in Commvault Command Center. Choose to "Add." Enter a friendly name for the Storage Account and then select "Microsoft Azure Storage" from the "Type" list. Select a Media Agent server to be used to transfer backups to Azure Storage. Add the container you created, choose the Storage Tier to leverage within the Azure Storage account, and select the Credentials created in Step #8. Finally, choose whether or not to transfer deduplicated backups or not and a location for the deduplication database.
-
- ![Screenshot of the Add cloud user interface. In the Archive drop-down menu, **Archive** is selected.](../media/commvault-add-storage.png)
+ 1. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](../../../../common/authorization-resource-provider.md#built-in-roles-for-management-operations).
+ 1. Restrict access to the account to specific network segments with [storage firewall settings](../../../../common/storage-network-security.md) to prevent access attempts from outside your corporate network.
-10. Finally, add your new Azure Storage resource to an existing or new Plan in Commvault Command Center via "Manage" --> "Plans" as a "Backup Destination."
+ ![Shows storage firewall settings in the portal.](../media/storage-firewall.png)
- ![Screenshot of the COMMVAULT Command Center user interface. In the left navigation, under **Manage**, **Plans** is selected.](../media/commvault-plan.png)
+ 1. Set a [delete lock](../../../../../azure-resource-manager/management/lock-resources.md) on the account to prevent accidental deletion of the storage account. A scripted sketch of this step follows this list.
-11. ***(Optional)*** If you plan to leverage Azure as a Recovery site or Commvault to migrate servers and applications to Azure, it is a best practice to deploy a VSA Proxy in Azure. You can find detailed instructions [here](https://documentation.commvault.com/commvault/v11/article?p=106208.htm).
+ ![Shows setting a delete lock in the portal.](../media/resource-lock.png)
-### Azure alerting and performance monitoring
+ 1. Configure additional [security best practices](../../../../../storage/blobs/security-recommendations.md).
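   As referenced above, the delete lock can also be applied from code. A minimal sketch, assuming `azure-mgmt-resource` and placeholder names:

   ```python
   # Sketch: place a CanNotDelete lock on the backup storage account (placeholder names).
   from azure.identity import DefaultAzureCredential
   from azure.mgmt.resource import ManagementLockClient

   lock_client = ManagementLockClient(DefaultAzureCredential(), "<subscription-id>")

   lock_client.management_locks.create_or_update_at_resource_level(
       resource_group_name="backup-rg",
       resource_provider_namespace="Microsoft.Storage",
       parent_resource_path="",
       resource_type="storageAccounts",
       resource_name="commvaultbackupsa01",
       lock_name="backup-account-delete-lock",
       parameters={"level": "CanNotDelete", "notes": "Protect the backup storage account."},
   )
   ```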
-It is advisable to monitor both your Azure resources and Commvault's ability to leverage them as you would with any storage target you rely on to store your backups. A combination of Azure Monitor and Commvault Command Center monitoring will help you keep your environment healthy.
+1. In the Commvault Command Center, navigate to **Manage** -> **Security** -> **Credential Manager**. Choose a **Cloud Account** with a **Vendor type** of **Microsoft Azure Storage**, select the **MediaAgent** that will transfer data to and from Azure, and add the storage account name and access key.
-#### Microsoft Azure portal
+ ![Shows adding credentials in Commvault Command Center.](../media/commvault-credential.png)
-Microsoft Azure provides a robust monitoring solution in the form of [Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/insights/monitor-azure-resource). You can [configure Azure Monitor](https://docs.microsoft.com/azure/storage/common/monitor-storage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-powershell#configuration) to track Azure Storage capacity, transactions, availability, authentication, and more. The full reference of metrics tracked may be found [here](https://docs.microsoft.com/azure/storage/common/monitor-storage-reference). A few useful metrics to track are BlobCapacity - to make sure you remain below the maximum [Storage Account Capacity limit](https://docs.microsoft.com/azure/storage/common/scalability-targets-standard-account), Ingress and Egress - to track the amount of data being written to and read from your Azure Storage account, and SuccessE2ELatency - to track the roundtrip time for requests to and from Azure Storage and your MediaAgent.
+9. Next, navigate to **Storage** -> **Cloud** in Commvault Command Center. Select **Add**. Enter a friendly name for the storage account, and then select **Microsoft Azure Storage** from the **Type** list. Select a MediaAgent server to be used to transfer backups to Azure Storage. Add the container you created, choose the storage tier to use within the Azure storage account, and select the credentials created in step 8. Finally, choose whether to transfer deduplicated backups, and specify a location for the deduplication database.
-You can also [create log alerts](https://docs.microsoft.com/azure/service-health/alerts-activity-log-service-notifications) to track Azure Storage service health and view the [Azure Status Dashboard](https://status.azure.com/status) at anytime.
+ ![Screenshot of Commvault's Add cloud user interface. In the Archive drop-down menu, **Archive** is selected.](../media/commvault-add-storage.png)
-#### Commvault Command Center
+10. Finally, add your new Azure Storage resource to an existing or new plan in Commvault Command Center via **Manage** -> **Plans** as a backup destination.
-[Creating alerts for cloud storage pools](https://documentation.commvault.com/commvault/v11/article?p=100514_3.htm)
+ ![Screenshot of the Commvault Command Center user interface. In the left navigation, under **Manage**, **Plans** is selected.](../media/commvault-plan.png)
-[Where can customers go to view performance reports, job completion and begin basic troubleshooting](https://documentation.commvault.com/commvault/v11/article?p=95306_1.htm)
+11. *(Optional)* If you plan to use Azure as a recovery site or Commvault to migrate servers and applications to Azure, it's a best practice to deploy a VSA Proxy in Azure. For detailed instructions, see the [Commvault documentation](https://documentation.commvault.com/commvault/v11/article?p=106208.htm).
-### How to open support cases
+## Operational guidance
-When you need assistance with your Backup to Azure Solution, we recommend opening a case with both Commvault and Azure so our support organizations can engage collaboratively, if necessary.
+### Azure alerts and performance monitoring
-#### How to open a case with Commvault
+It is advisable to monitor both your Azure resources and Commvault's ability to leverage them as you would with any storage target you rely on to store your backups. A combination of Azure Monitor and Commvault Command Center monitoring will help you keep your environment healthy.
-Navigate to the [Commvault Support Site](https://www.commvault.com/support), Sign in, and open a case.
+#### Azure portal
-If you need to understand the support contract options available to you, please see [Commvault Support Options](https://ma.commvault.com/support)
+Azure provides a robust monitoring solution in the form of [Azure Monitor](../../../../../azure-monitor/essentials/monitor-azure-resource.md). You can [configure Azure Monitor](../../../../common/monitor-storage.md) to track Azure Storage capacity, transactions, availability, authentication, and more. You can find the full reference of metrics that are collected [here](../../../../blobs/monitor-blob-storage-reference.md). A few useful metrics to track are BlobCapacity - to make sure you remain below the maximum [storage account capacity limit](../../../../common/scalability-targets-standard-account.md), Ingress and Egress - to track the amount of data being written to and read from your Azure Storage account, and SuccessE2ELatency - to track the roundtrip time for requests to and from Azure Storage and your MediaAgent.
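If you want to pull those metrics programmatically instead of viewing them in the portal, here's a minimal sketch using the `azure-monitor-query` package. The resource ID is a placeholder; note that blob-level metrics such as BlobCapacity are emitted by the account's `blobServices/default` sub-resource.

```python
# Sketch: query a few useful Azure Storage metrics (placeholder resource ID).
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/backup-rg"
    "/providers/Microsoft.Storage/storageAccounts/commvaultbackupsa01"
    "/blobServices/default"
)

response = client.query_resource(
    resource_id,
    metric_names=["BlobCapacity", "Ingress", "Egress", "SuccessE2ELatency"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            value = point.average if point.average is not None else point.total
            if value is not None:
                print(metric.name, point.timestamp, value)
```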
-You may also call in to open a case, or reach Commvault Support via email:
+You can also [create log alerts](../../../../../service-health/alerts-activity-log-service-notifications-portal.md) to track Azure Storage service health and view the [Azure status dashboard](https://status.azure.com/status) at any time.
-Toll Free: +1 877-780-3077
+#### Commvault Command Center
-[Worldwide Support Numbers](https://ma.commvault.com/Support/TelephoneSupport)
+- [Create an alert for cloud storage pools](https://documentation.commvault.com/commvault/v11/article?p=100514_3.htm)
+- For information about where you can view performance reports, job completion and begin basic troubleshooting, see [Dashboards](https://documentation.commvault.com/commvault/v11/article?p=95306_1.htm).
-[Email Commvault Support](mailto:support@commvault.com)
+### How to open support cases
-#### How to open a case with the Azure support team
+When you need help with your backup to Azure solution, open a case with both Commvault and Azure. This helps our support organizations collaborate, if necessary.
-Within the [Azure portal](https://portal.azure.com) search for "Support" in the Search Bar at the top of the portal and choose "+ New Support Request"
-> [!Note]
-When opening a case, be specific that you need assistance with "Azure Storage" or "Azure Networking" and **NOT** "Azure Backup." Azure Backup is a Microsoft Azure native service and your case will be routed incorrectly.
+#### To open a case with Commvault
-### Links to relevant Commvault documentation
+On the [Commvault Support Site](https://www.commvault.com/support), sign in, and open a case.
-Commvault documentation providing further detail:
+To understand the support contract options available to you, see [Commvault support options](https://ma.commvault.com/support).
-[Commvault User Guide](https://documentation.commvault.com/commvault/v11/article?p=37684_1.htm)
+You may also call in to open a case, or reach Commvault Support via email:
-[Commvault Azure Architecture Guide](https://www.commvault.com/resources/public-cloud-architecture-guide-for-microsoft-azure-v11-sp16)
+- Toll free: +1 877-780-3077
+- [Worldwide Support numbers](https://ma.commvault.com/Support/TelephoneSupport)
+- [Email Commvault Support](mailto:support@commvault.com)
-### Link to Marketplace offering
+#### To open a case with Azure
-You can also continue to use the Commvault solution you know and trust to protect your workloads running on Azure. Commvault has made it easy to deploy their solution in Azure and protect Azure Virtual Machines and many other Azure Services.
+In the [Azure portal](https://portal.azure.com), search for **support** in the search bar at the top. Select **Help + support** -> **New Support Request**.
-[Deploy Commvault via the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/commvault.commvault?tab=Overview)
+> [!NOTE]
+> When you open a case, be specific that you need assistance with Azure Storage or Azure Networking. Do not specify Azure Backup. Azure Backup is the name of an Azure service, and specifying it will cause your case to be routed incorrectly.
-[Azure Datasheet](https://www.commvault.com/resources/microsoft-azure-cloud-platform-datasheet)
+### Links to relevant Commvault documentation
-[Comprehensive list of Azure Features and Services supported](https://documentation.commvault.com/commvault/v11/article?p=109795_1.htm)
+See the following Commvault documentation for further detail:
-[How to use Commvault to protect SAP HANA in Azure](https://azure.microsoft.com/resources/protecting-sap-hana-in-azure/)
+- [Commvault User Guide](https://documentation.commvault.com/commvault/v11/article?p=37684_1.htm)
+- [Commvault Azure Architecture Guide](https://www.commvault.com/resources/public-cloud-architecture-guide-for-microsoft-azure-v11-sp16)
-## Next steps
+### Marketplace offerings
-Explore additional resources on these external websites to get information about specialized usage scenarios:
+Commvault has made it easy to deploy their solution in Azure to protect Azure Virtual Machines and many other Azure services. For more information, see the following references:
-[Use Commvault to Migrate your servers and applications to Azure](https://www.commvault.com/resources/demonstration-vmware-to-azure-migrations-with-commvault)
+- [Deploy Commvault via the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/commvault.commvault?tab=Overview)
+- [Commvault for Azure datasheet](https://www.commvault.com/resources/microsoft-azure-cloud-platform-datasheet)
+- [Commvault's list of supported Azure features and services](https://documentation.commvault.com/commvault/v11/article?p=109795_1.htm)
+- [How to use Commvault to protect SAP HANA in Azure](https://azure.microsoft.com/resources/protecting-sap-hana-in-azure/)
+
+## Next steps
-[Protect SAP in Azure with Commvault](https://www.youtube.com/watch?v=4ZGGE53mGVI)
+See these additional Commvault resources for information about specialized usage scenarios.
-[Protect Office365 with Commvault](https://www.youtube.com/watch?v=dl3nvAacxZU)
+- [Use Commvault to Migrate your servers and applications to Azure](https://www.commvault.com/resources/demonstration-vmware-to-azure-migrations-with-commvault)
+- [Protect SAP in Azure with Commvault](https://www.youtube.com/watch?v=4ZGGE53mGVI)
+- [Protect Office365 with Commvault](https://www.youtube.com/watch?v=dl3nvAacxZU)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/partner-overview.md
Title: Storage archive, backup, and disaster recovery partners -
-description: List of microsoft partners building customer solutions for archive, backup and bcdr with Azure Storage
-keywords: Storage, Blob, bcdr, backup, desaster recovery, archive
+
+description: List of Microsoft partner companies that build customer solutions for archive, backup and BCDR with Azure Storage
+ Previously updated : 12/11/2020- Last updated : 03/15/2021+ +
-# Azure Storage Archive, Backup, and Disaster Recovery partners
+# Azure Storage archive, backup, and disaster recovery partners
+
+This article highlights Microsoft partners that are integrated with Azure Storage for archive, backup, and for business continuity and disaster recovery (BCDR) workloads. These partner solutions take advantage of the scale and cost benefits of Azure Storage. You can use the solutions to help solve backup challenges, to create a disaster recovery site, or to archive unused content for long-term retention. With all the compliance standards that Azure Storage meets, and with Azure features such as [immutable storage](../../../blobs/storage-blob-immutable-storage.md) and [lifecycle management](../../../blobs/storage-lifecycle-management-concepts.md), these solutions can easily replace tape-based backups, and offer an on-demand economical recovery site.
-This article highlights Microsoft partners that are integrated with Azure Storage for Archive and BCDR workloads. They take advantage of the scale and [cost benefits](https://docs.microsoft.com/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal) of Azure Storage to effectively solve backup challenges, create a disaster recovery site or archive unused content for long-term retention. With all the compliance standards that Azure Storage meets, and with [immutable storage](https://docs.microsoft.com/azure/storage/blobs/storage-blob-immutable-storage) and lifecycle management, these solutions can easily replace tape-based backups, and offer an on-demand economical recovery site.
+## Verified partners
-## Verified Archive, Backup, and Disaster Recovery partners
-| Partner | Description | Website/Product link |
+| Partner | Description | Website/product link |
| - | -- | -- |
-|![Commvault company logo](./medi)|
-|![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi is trusted by the world's largest companies and public-sector institutions to optimize their unstructured storage environments. DobiProtect helps you keep a 'golden copy' of your most business-critical NAS data on Microsoft Azure to help protect against cyberthreats, ransomware, accidental deletions, and software vulnerabilities. Protect your business-critical unstructured data assets to an 'air-gapped' golden copy in the cloud. Keep storage costs to a minimum by selecting just the data that will be needed when disaster strikes. And when disaster does occur, recover your data entirely, restore just a subset of data, or failover to your golden copy. |[Partner Page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobiprotect?tab=Overview)|
- ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology has been developing high-performance, secure, data management software solutions since 2004. Their customers are based in Media and Entertainment, Enterprise IT, Surveillance, and SMB/SME markets. Customers use Tiger solutions worldwide in over 120 countries. Tiger Technology enables organizations of any size and scale to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br> Tiger Bridge is a non-proprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space and enables hybrid workflows. This transparent file server extension enables millions of Windows server users to benefit from Microsoft Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses a number of data management challenges including, File Server Extension, Disaster Recovery, Cloud Migration, Backup & Archive, Remote Collaboration, and Multi-site Sync as well as continuous Data Protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
-| ![Veeam company logo](./medi)|
-| ![Veritas company logo](./media/veritas-logo.png) |**Veritas**<br>Veritas Technologies enables businesses of all sizes to discover the truth in information, their most important digital asset. Using the Veritas platform, customers can accelerate their digital transformation and solve pressing IT and business challenges. They cover multi-cloud data management, data protection, storage optimization as well as compliance readiness and workload portability.<br>At the heart of our Enterprise Data Services Platform (EDSP) lies NetBackup, a unified data protection and recovery solution. NetBackup helps our customers standardize across their environment, greatly reducing complexity and risk regardless of workload or cloud. Enhanced by the strong partnership between Azure and NetBackup, customers experience push-button orchestrated disaster recovery, seamless workload and data portability, resiliency and mobility between Azure Stack environments, or between Azure regions. We enable organizations around the world to access, protect, gain insights from, and recover at scale their most important asset: data.<br>Veritas Backup Exec provides simple, rapid, and secure offsite backup to Azure for your in-house virtual and physical environments, and also protects cloud-based workloads in Azure.|[Partner Page](https://www.veritas.com/partners/microsoft-azure)<br>Azure Marketplace:<br>[NetBackup](https://azuremarketplace.microsoft.com/marketplace/apps/veritas.veritas-netbackup-8-s?tab=Overview)<br>[Backup Exec](https://azuremarketplace.microsoft.com/marketplace/apps/veritas.backup-exec-20?tab=Overview)||<br>|
+|![Commvault company logo](./medi)|
+|![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiProtect helps you keep a "golden copy" of your most business-critical network attached storage (NAS) data on Azure. This helps protect against cyberthreats, ransomware, accidental deletions, and software vulnerabilities. To keep storage costs to a minimum, select just the data that you'll need when disaster strikes. When disaster does occur, recover your data entirely, restore just a subset of data, or fail over to your golden copy. |[Partner Page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobiprotect?tab=Overview)|
+ ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
+| ![Veeam company logo](./medi)|
+| ![Veritas company logo](./media/veritas-logo.png) |**Veritas**<br>Veritas Technologies solutions cover multi-cloud data management, data protection, storage optimization as well as compliance readiness and workload portability.<br><br>NetBackup offers a unified data protection and recovery solution. NetBackup helps you standardize across your environment, greatly reducing complexity and risk regardless of workload or cloud. NetBackup offers push-button orchestrated disaster recovery, seamless workload and data portability, and resiliency and mobility between Azure Stack environments, or between Azure regions.<br><br>Veritas Backup Exec provides simple, rapid, and secure offsite backup to Azure for your in-house virtual and physical environments. It also protects cloud-based workloads in Azure.|[Partner Page](https://www.veritas.com/partners/microsoft-azure)<br>Azure Marketplace:<br>[NetBackup](https://azuremarketplace.microsoft.com/marketplace/apps/veritas.veritas-netbackup-8-s?tab=Overview)<br>[Backup Exec](https://azuremarketplace.microsoft.com/marketplace/apps/veritas.backup-exec-20?tab=Overview)|
## Next steps
-To learn more about some of our other partners, see [Analytics and Big Data partners](..\analytics\partner-overview.md), [Container Solution partners](..\container-solutions\partner-overview.md), [Data Management and Migration partners](..\data-management\partner-overview.md), and also [Primary and Secondary Storage partners](..\primary-secondary-storage\partner-overview.md).
-
+To learn more about some of our other partners, see:
+- [Analytics and big data partners](..\analytics\partner-overview.md)
+- [Container solution partners](..\container-solutions\partner-overview.md)
+- [Data management and migration partners](..\data-management\partner-overview.md)
+- [Primary and secondary storage partners](..\primary-secondary-storage\partner-overview.md)
storage Veeam Solution Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/veeam/veeam-solution-guide.md
Title: Back up your data to Azure with Veeam-
-description: Web page provides an overview of factors to consider and steps to follow to leverage Azure as a storage target and recovery location for Veeam Backup and Recovery
-keywords: Veeam,, Backup to Cloud, Backup, Backup to Azure, Disaster Recovery, Business Continuity
+
+description: Provides an overview of factors to consider and steps to follow to use Azure as a storage target and recovery location for Veeam Backup and Recovery
+ Previously updated : 11/11/2020- Last updated : 03/15/2021+ -+ # Backup to Azure with Veeam
-This article provides a guide to integrating a Veeam infrastructure with Azure Blob Storage. It includes pre-requisites, Azure Storage principles, implementation, and operational guidance. This article only addresses using Azure as an offsite Backup target and a recovery site in the event of a disaster, which prevents normal operation within your primary site. Veeam also offers a lower RTO solution, Veeam replication, as a means to have a standby VM ready to boot and recover more quickly in the event of a disaster and protection of resources within an Azure Production environment. Veeam does also dedicate tools to Backup Azure and Office 365 resources. These capabilities are out of scope for this document.
+This article helps you integrate a Veeam infrastructure with Azure Blob storage. It includes prerequisites, considerations, implementation, and operational guidance. This article addresses using Azure as an offsite backup target and as a recovery site in the event of a disaster that prevents normal operation within your primary site.
+
+> [!NOTE]
+> Veeam also offers a lower recovery time objective (RTO) solution, Veeam replication. This solution lets you have a standby VM that can help you recover more quickly in the event of a disaster in an Azure production environment. Veeam also has dedicated tools to back up Azure and Office 365 resources. These capabilities are outside the scope of this document.
-## Reference architecture for on-premises to Azure and In-Azure deployments
+## Reference architecture
-![Veeam to Azure Reference Architecture](../media/veeam-architecture.png)
+The following diagram provides a reference architecture for on-premises to Azure and in-Azure deployments.
-Your existing Veeam deployment can easily integrate with Azure by adding an Azure Storage Account, or multiple accounts, as a Cloud Backup Repository. Veeam also allows you to recover backups from on-premises within Azure giving you a recovery-on-demand site in Azure.
+![Veeam to Azure reference architecture diagram.](../media/veeam-architecture.png)
+
+Your existing Veeam deployment can easily integrate with Azure by adding an Azure storage account, or multiple accounts, as a cloud backup repository. Veeam also allows you to recover on-premises backups within Azure, giving you a recovery-on-demand site in Azure.
## Veeam interoperability matrix
-| Workload | GPv2 and Blob Storage | Cool Tier support | Archive Tier support | Data Box Family Support |
+
+| Workload | GPv2 and Blob Storage | Cool tier support | Archive tier support | Data Box Family support |
|--|--|--|-|-|
-| On-premises VMs/data | v10a | v10a | NA | 10a* |
-| Azure VMs | v10a | v10a | NA | 10a* |
-| Azure Blob | v10a | v10a | NA | 10a* |
-| Azure Files | v10a | v10a | NA | 10a* |
+| On-premises VMs/data | v10a | v10a | N/A | 10a<sup>*</sup> |
+| Azure VMs | v10a | v10a | N/A | 10a<sup>*</sup> |
+| Azure Blob | v10a | v10a | N/A | 10a<sup>*</sup> |
+| Azure Files | v10a | v10a | N/A | 10a<sup>*</sup> |
-> [!Note]
-Veeam Backup and Replication does support REST API only for Azure Data Box, therefore Azure Data Box Disk is not supported. Support for the Archive Tier of Azure Blob Storage is expected in Veeam v11.
+<sup>*</sup>Veeam Backup and Replication supports only the REST API for Azure Data Box. Therefore, Azure Data Box Disk is not supported.
## Before you begin
-A little upfront planning will make sure you join the ranks of the many, many happy customers using Azure as an offsite backup target and recovery site.
+A little upfront planning will help you use Azure as an offsite backup target and recovery site.
-### Are you new to Azure?
+### Get started with Azure
-Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](https://docs.microsoft.com/azure/architecture/cloud-adoption/) \(CAF\) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade Cloud Adoption. The CAF includes a step-by-step [Azure Setup Guide](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/) for those new to Azure to help you get up and running quickly and securely and you can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You will find sample architectures and specific best practices for deploying applications and free training resources to put you on the path to Azure expertise.
+Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](https://docs.microsoft.com/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade cloud adoption. The CAF includes a step-by-step [Azure Setup Guide](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise.
### Consider the network between your location and Azure
-Whether leveraging Cloud resources to run Production, Test and Development, or as a Backup target and Recovery site it is important to understand your bandwidth needs for initial backup seeding and for on-going day-to-day transfers.
+Whether using cloud resources to run production, test and development, or as a backup target and recovery site, it's important to understand your bandwidth needs for initial backup seeding and for ongoing day-to-day transfers.
+
+Azure Data Box provides a way to transfer your initial backup baseline to Azure without requiring more bandwidth. This is useful if the baseline transfer is estimated to take longer than you can tolerate. You can use the Data Transfer estimator when you create a storage account to estimate the time required to transfer your initial backup.
+
+![Shows the Azure Storage data transfer estimator in the portal.](../media/az-storage-transfer.png)
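Alongside the portal estimator, a quick back-of-the-envelope calculation can help you decide whether offline seeding is warranted. This sketch uses placeholder inputs and ignores protocol overhead and deduplication or compression.

```python
# Sketch: rough initial-seeding time at a sustained throughput (placeholder inputs).
def transfer_days(baseline_tib: float, sustained_mbps: float) -> float:
    bits = baseline_tib * 1024**4 * 8              # TiB -> bits
    seconds = bits / (sustained_mbps * 1_000_000)  # Mbps -> bits per second
    return seconds / 86_400

# Example: a 100 TiB baseline over 500 Mbps of spare headroom takes roughly 20 days.
print(f"{transfer_days(100, 500):.1f} days")
```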
-Azure Data Box provides a means to transfer your initial backup baseline to Azure without requiring additional bandwidth if the baseline transfer is estimated to take longer than you can tolerate. You can leverage the Data Transfer estimator when you create a storage account to estimate the time required to transfer your initial backup.
+Remember, you'll require enough network capacity to support daily data transfers within the required transfer window (backup window) without impacting production applications. This section outlines the tools and techniques that are available to assess your network needs.
-![Azure Storage Data Transfer Estimator](../media/az-storage-transfer.png)
+#### Determine how much bandwidth you'll need
-Remember, you will require enough network capacity to support daily data transfers within the required transfer window (Backup window) without impacting Production applications. This section will outline the tools and techniques available to assess your network needs.
+Multiple assessment options are available to determine change rate and total backup set size for the initial baseline transfer to Azure. Here are some examples of assessment and reporting tools:
-#### How can you determine how much bandwidth you will need?
+- [MiTrend](https://mitrend.com/)
+- [Aptare](https://www.veritas.com/insights/aptare-it-analytics)
+- [Datavoss](https://www.datavoss.com/)
-Multiple assessment options are available to determine change rate and total backup set size for the initial baseline transfer to Azure. Here are some examples of assessment and reporting tools like:
- - [MiTrend](https://mitrend.com/)
- - [Aptare](https://www.veritas.com/insights/aptare-it-analytics)
- - [Datavoss](https://www.datavoss.com/)
+#### Determine unutilized internet bandwidth
-#### How will I know how much headroom I have with my current Internet connection?
+It's important to know how much unutilized bandwidth (or *headroom*) you typically have available on a day-to-day basis. This helps you assess whether you can meet your goals for:
-It is important to know how much headroom, or typically unutilized, bandwidth you have available on a day-to-day basis. This will allow you to properly assess if you can meet your goals for initial time to upload, when not using Azure Data Box for offline seeding, and for completing daily backups based on the change rate identified above and your backup window. Below are methods you can use to identify the bandwidth headroom your backups to Azure are free to consume.
+- initial time to upload when you're not using Azure Data Box for offline seeding
+- completing daily backups based on the change rate identified earlier and your backup window
+
+Use the following methods to identify the bandwidth headroom that your backups to Azure are free to consume.
+
+- If you're an existing Azure ExpressRoute customer, view your [circuit usage](../../../../../expressroute/expressroute-monitoring-metrics-alerts.md#circuits-metrics) in the Azure portal.
+- Contact your ISP. They should be able to share reports that show your existing daily and monthly utilization.
+- There are several tools that can measure utilization by monitoring your network traffic at the router/switch level. These include:
-- Are you an existing Azure ExpressRoute customer? View your [circuit usage](https://docs.microsoft.com/azure/expressroute/expressroute-monitoring-metrics-alerts#circuits-metrics) in the Azure portal.
-- You can Contact your ISP. They should have reports to share with you illustrating your existing daily and monthly utilization.
-- There are several tools that can measure utilization by monitoring your network traffic at your router/switch level including:
  - [Solarwinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS)
  - [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring)
  - [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/index.html)
  - [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring)
-### Choosing the right Storage options
+### Choose the right storage options
-When using Azure as a backup target, customers make use of [Azure Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-blobs-introduction)\. Azure Blob storage is Microsoft's object storage solution. Blob storage is optimized for storing massive amounts of unstructured data, which is data that does not adhere to any data model or definition. Additionally, Azure Storage is durable, highly available, secure, and scalable. MicrosoftΓÇÖs platform offers up flexibility to select the right storage for the right workload in order to provide the [level of resiliency](https://docs.microsoft.com/azure/storage/common/storage-redundancy?toc=/azure/storage/blobs/toc.json) to meet your internal SLAs. Blob Storage is a pay-per-use service. You are [charged monthly](https://docs.microsoft.com/azure/storage/blobs/storage-blob-storage-tiers?tabs=azure-portal#pricing-and-billing) for the amount of data stored, accessing that data, and - in the case of Cool and Archive Tiers - a minimum required retention period. The resiliency and tiering options applicable to backup data are summarized in the tables below.
+When you use Azure as a backup target, you'll make use of [Azure Blob storage](../../../../blobs/storage-blobs-introduction.md). Blob storage is Microsoft's object storage solution, optimized for storing massive amounts of unstructured data, which is data that does not adhere to any data model or definition. Additionally, Azure Storage is durable, highly available, secure, and scalable. You can select the right storage for your workload to provide the [level of resiliency](../../../../common/storage-redundancy.md) to meet your internal SLAs. Blob storage is a pay-per-use service. You're [charged monthly](../../../../blobs/storage-blob-storage-tiers.md#pricing-and-billing) for the amount of data stored, for accessing that data, and, in the case of the cool and archive tiers, for a minimum required retention period. The resiliency and tiering options applicable to backup data are summarized in the following tables.
-**Azure Blob Storage resiliency options:**
+**Blob storage resiliency options:**
-| |Locally Redundant |Zone Redundant |Geographically Redundant |Geo Zone Redundant |
+| |Locally-redundant |Zone-redundant |Geo-redundant |Geo-zone-redundant |
||||||
-|Effective # of Copies | 3 | 3 | 6 | 6 |
-|# of Availability Zones | 1 | 3 | 2 | 4 |
-|# of Regions | 1 | 1 | 2 | 2 |
-|Manual Failover to Secondary Region | NA | NA | Yes | Yes |
+|**Effective # of copies** | 3 | 3 | 6 | 6 |
+|**# of availability zones** | 1 | 3 | 2 | 4 |
+|**# of regions** | 1 | 1 | 2 | 2 |
+|**Manual failover to secondary region** | N/A | N/A | Yes | Yes |
-**Azure Blob Storage tiers:**
+**Blob storage tiers:**
-| | Hot Tier |Cool Tier | Archive Tier |
+| | Hot tier |Cool tier | Archive tier |
| -- | -- | -- | -- |
-| Availability | 99.9% | 99% | Offline |
-| Usage Charges | Higher storage costs, Lower access, and transaction costs | Lower storage costs, higher access, and transaction costs | Lowest storage costs, highest access, and transaction costs |
-| Minimum Data Retention Required | NA | 30 days | 180 days |
-| Latency (Time to First Byte) | Milliseconds | Milliseconds | Hours |
-
-#### Sample Backup to Azure cost model
+| **Availability** | 99.9% | 99% | Offline |
+| **Usage charges** | Higher storage costs; lower access and transaction costs | Lower storage costs; higher access and transaction costs | Lowest storage costs; highest access and transaction costs |
+| **Minimum data retention required** | N/A | 30 days | 180 days |
+| **Latency (time to first byte)** | Milliseconds | Milliseconds | Hours |
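To see how these tiers behave in practice, here's a minimal sketch using the `azure-storage-blob` package with placeholder names. It changes a test blob's tier and starts a rehydration from archive; as the table notes, data in the archive tier is offline and rehydration can take hours. Tier changes for backup data itself should follow your backup vendor's guidance.

```python
# Sketch: move a test blob between access tiers (placeholder account, container, and blob).
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://<account-name>.blob.core.windows.net",
    container_name="tier-demo",
    blob_name="sample-object.dat",
    credential="<access-key>",
)

blob.set_standard_blob_tier("Cool")      # cheaper storage, higher access/transaction cost
blob.set_standard_blob_tier("Archive")   # cheapest storage, offline until rehydrated

# Recalling from archive starts an asynchronous rehydration that can take hours.
blob.set_standard_blob_tier("Hot", rehydrate_priority="Standard")
```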
-The concept of pay-per-use can be daunting to customers who are new to the Public Cloud. While you pay for only the capacity used, you do also pay for transactions (read and or writes) and [egress for data](https://azure.microsoft.com/pricing/details/bandwidth/) read back to your on-premises environment when [Azure Express Route Direct Local or Express Route Unlimited Data plan](https://azure.microsoft.com/pricing/details/expressroute/) are in use where data egress from Azure is included. You can perform what if analysis based on list pricing or with [Azure Storage Reserved Capacity pricing](https://docs.microsoft.com/azure/cost-management-billing/reservations/save-compute-costs-reservations), which can deliver up to 38% savings, in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). Here is an example pricing exercise to model the monthly cost of backing up to Azure, this is an example only and ***your pricing may vary due to activities not captured here:***
+#### Sample backup to Azure cost model
+The concept of pay-per-use can be daunting to customers who are new to the cloud. While you pay for only the capacity used, you also pay for transactions (reads and writes) and for [egress for data](https://azure.microsoft.com/pricing/details/bandwidth/) read back to your on-premises environment, unless you use an [Azure ExpressRoute Direct Local or ExpressRoute Unlimited Data plan](https://azure.microsoft.com/pricing/details/expressroute/), where data egress from Azure is included. You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to perform "what if" analysis. You can base the analysis on list pricing or on [Azure Storage Reserved Capacity pricing](../../../../../cost-management-billing/reservations/save-compute-costs-reservations.md), which can deliver up to 38% savings. Here's an example pricing exercise to model the monthly cost of backing up to Azure. This is only an example. *Your pricing may vary due to activities not captured here.*
-|Cost Factor |Monthly Cost |
+|Cost factor |Monthly cost |
|||
-|100 TB of Backup Data on Cool Storage |$1556.48 |
-|2 TB of new data written per day x 30 Days |$72 in transactions |
-|Monthly Estimated Total |$1628.48 |
+|100 TB of backup data on cool storage |$1556.48 |
+|2 TB of new data written per day x 30 days |$72 in transactions |
+|Monthly estimated total |$1628.48 |
|||
-|One Time Restore of 5 TB to on-premises over Public Internet | $527.26 |
+|One time restore of 5 TB to on-premises over public internet | $527.26 |
> [!Note]
-This estimate was generated in the Azure Pricing Calculator using East US Pay-as-you-go pricing and is based on the Veeam default of 256kb chunk size for WAN transfers. This example may not be applicable towards your requirements.
+> This estimate was generated in the Azure Pricing Calculator using East US pay-as-you-go pricing. It's based on the Veeam default of a 256 KB chunk size for WAN transfers. This example may not be applicable to your requirements.
+
+## Implementation guidance
+
+This section provides a brief guide for how to add Azure Storage to an on-premises Veeam deployment. For detailed guidance and planning considerations, see the [Veeam Cloud Connect Backup Guide](https://helpcenter.veeam.com/docs/backup/cloud/cloud_backup.html?ver=100).
+
+1. Open the Azure portal, and search for **storage accounts**. You can also select the default **Storage accounts** icon.
-## Implementation and operational guidance
+ ![Shows adding a storage accounts in the Azure portal.](../media/azure-portal.png)
-This section provides a brief guide to adding Azure Storage to an on-premises Veeam deployment. If you are interested in detailed guidance and planning considerations, we recommend reviewing the [Veeam Cloud Connect Backup Guide](https://helpcenter.veeam.com/docs/backup/cloud/cloud_backup.html?ver=100).
+ ![Shows where you've typed storage in the search box of the Azure portal.](../media/locate-storage-account.png)
-1. Open the Azure portal, and search for "Storage Accounts" or click on the default services icon.
+2. Select **Create** to add an account. Select or create a resource group, provide a unique name, and choose the region. Select **Standard** performance, always leave the account kind as **Storage V2**, choose the replication level that meets your SLAs, and select the default tier that your backup software will apply. An Azure Storage account makes hot, cool, and archive tiers available within a single account, and Veeam policies allow you to use multiple tiers to effectively manage the lifecycle of your data.
- ![Azure Portal](../media/azure-portal.png)
+ ![Shows storage account settings in the portal](../media/account-create-1.png)
- ![Storage Accounts in the Azure Portal](../media/locate-storage-account.png)
+3. Keep the default networking options for now and move on to **Data protection**. Here, you can choose to enable soft delete, which allows you to recover an accidentally deleted backup file within the defined retention period and offers protection against accidental or malicious deletion.
-2. Choose to Add an account, and select or create a Resource Group, provide a unique name, choose the region, select "Standard" Performance, always leave account kind as "Storage V2," choose the replication level, which meets your SLAs, and the default tier your backup software will leverage. An Azure Storage account makes Hot, Cool, and Archive tiers available within a single account and Veeam policies allow you to leverage multiple tiers to effectively manage the lifecycle of your data. Proceed to the next step.
-
- ![Creating a Storage Account](../media/account-create-1.png)
+ ![Shows the Data Protection settings in the portal.](../media/account-create-2.png)
-3. Stick with the default networking options for now and move on to "Data Protection." Here, you can choose to enable "Soft Delete" which allows you to recover an accidentally deleted Backup file within the defined retention period and offers protection against accidental or malicious deletion.
- ![Creating a Storage Account Part 2](../media/account-create-2.png)
+4. Next, we recommend the default settings from the **Advanced** screen for backup to Azure use cases.
-4. Next, we recommend the default settings from the "Advanced" screen for Backup to Azure use cases.
+ ![Shows Advanced settings tab in the portal.](../media/account-create-3.png)
- ![Creating a Storage Account Part 3](../media/account-create-3.png)
+5. Add tags for organization if you use tagging, and create your account.
-5. Add tags for organization if you leverage tagging and create your account. You now have petabytes of on-demand storage at your disposal!
+6. Two quick steps are all that are now required before you can add the account to your Veeam environment. Navigate to the account you created in the Azure portal and select **Containers** under the **Blob service** menu. Add a container and choose a meaningful name. Then, navigate to the **Access keys** item under **Settings** and copy the **Storage account name** and one of the two access keys. You will need the container name, account name, and access key in the next steps.
-6. Two quick steps are all that are now required before you can add the account to your Veeam environment. Navigate to the account you created in the Azure Portal and select "Containers" under the "Blob Service" menu in the Portal blade. Add a new container and choose a meaningful name. Then, navigate to the "Access Keys" item under "Settings" and copy the "Storage account name" and one of the two access keys. You will need the Container name, Account Name, and Access Key in our next steps.
+ ![Shows container creation in the portal.](../media/container-b.png)
+
+ ![Shows access key settings in the portal.](../media/access-key.png)
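   If you prefer to retrieve the account name and keys from code instead of copying them in the portal, here's a minimal sketch assuming `azure-mgmt-storage` and placeholder names.

   ```python
   # Sketch: list the access keys for the backup storage account (placeholder names).
   from azure.identity import DefaultAzureCredential
   from azure.mgmt.storage import StorageManagementClient

   client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

   keys = client.storage_accounts.list_keys("backup-rg", "veeambackupsa01")
   for key in keys.keys:
       print(key.key_name, key.value)
   ```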
- ![Creating a Container](../media/container-b.png)
-
- ![Grab that Account Info](../media/access-key.png)
-
> [!Note]
-Veeam Backup and Replication does offer additional options to connect to Azure. For the use case of this article, leveraging Microsoft Azure Blob Storage as a backup target, using above method is the recommended best practice.
+ > Veeam Backup and Replication offers additional options to connect to Azure. For the use case of this article (using Microsoft Azure Blob Storage as a backup target), the above method is the recommended best practice.
-7. ***(Optional)*** You can add additional layers of security to your deployment.
+7. *(Optional)* You can add more layers of security to your deployment.
- 1. Configure Role Based Access to limit who can make changes to your Storage Account. [Learn more here](https://docs.microsoft.com/azure/storage/common/authorization-resource-provider?toc=/azure/storage/blobs/toc.json)
+ 1. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](../../../../common/authorization-resource-provider.md#built-in-roles-for-management-operations).
- 1. Restrict access to the account to specific network segments with [Storage Firewall](https://docs.microsoft.com/azure/storage/common/storage-network-security?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal) to prevent access attempts from outside your corporate network.
+ 1. Restrict access to the account to specific network segments with [storage firewall settings](../../../../common/storage-network-security.md) to prevent access attempts from outside your corporate network. A scripted sketch of this step follows this list.
- ![Storage Firewall](../media/storage-firewall.png)
+ ![Shows storage firewall settings in the portal.](../media/storage-firewall.png)
- 1. Set a [Delete Lock](https://docs.microsoft.com/azure/azure-resource-manager/management/lock-resources) on the account to prevent accidental deletion of the Storage Account.
+ 1. Set a [delete lock](../../../../../azure-resource-manager/management/lock-resources.md) on the account to prevent accidental deletion of the storage account.
- ![Resource Lock](../media/resource-lock.png)
- 1.) Configure additional [security best practices](https://docs.microsoft.com/azure/storage/blobs/security-recommendations).
-8. In the Veaam Backup and Replication Management Console, navigate to "Backup Infrastructure" --> right click in the overview pane and select "Add Backup Repository" to open the configuration wizard. In the dialog box, select object storage --> Microsoft Azure Blob Storage --> Azure Blob Storage.
-
- ![Veeam Repository Wizard Screen a](../media/veeam-repo-a.png)
+ ![Resource Lock](../media/resource-lock.png)
- ![Veeam Repository Wizard Screen b](../media/veeam-repo-b.png)
+ 1. Configure additional [security best practices](../../../../../storage/blobs/security-recommendations.md).
- ![Veeam Repository Wizard Screen c](../media/veeam-repo-c.png)
+8. In the Veeam Backup and Replication Management Console, navigate to **Backup Infrastructure**, right-click in the overview pane, and select **Add Backup Repository** to open the configuration wizard. In the dialog box, select **Object storage** -> **Microsoft Azure Blob Storage** -> **Azure Blob Storage**.
-9. Next, specify a name and a description of your new Microsoft Azure Blob Repository.
-
- ![Veeam Repository Wizard Screen d](../media/veeam-repo-d.png)
+ ![Shows selecting object storage in the Veeam Repository Wizard.](../media/veeam-repo-a.png)
-10. In the next step, add the credentials to access your Azure Storage Account. Select "Microsoft Azure Storage Account" in the Cloud Credential Manager, enter your storage account name and access key. Select Azure Global in the region selector and any gateway server if applicable.
-
- ![Veeam Repository Wizard Screen e](../media/veeam-repo-e.png)
+ ![Shows selecting Microsoft Azure Blob Storage in the Veeam Repository Wizard.](../media/veeam-repo-b.png)
-> [!Note]
-If you choose not to use a Veeam gateway server, make sure that all scale-out repository extents have direct internet access.
+ ![Shows selecting Azure Blob Storage in the Veeam Repository Wizard.](../media/veeam-repo-c.png)
-11. On the container register, select your Azure Storage Container and select or create a folder to store your Backups in. You can also define a soft limit on the overall storage capacity to be used by Veeam (recommended). Review the displayed information in the summary section and complete the configuration tool. The new repository can then be selected in your backup job configuration.
+9. Next, specify a name and a description of your new Blob storage repository.
- ![Veeam Repository Wizard Screen f](../media/veeam-repo-f.png)
-
- ![Veeam Repository Wizard Screen g](../media/veeam-repo-g.png)
+ ![Shows typing a name for the repository in the Veeam Repository Wizard.](../media/veeam-repo-d.png)
-### Azure alerting and performance monitoring
+10. In the next step, add the credentials to access your Azure storage account. Select **Microsoft Azure Storage Account** in the Cloud Credential Manager, and enter your storage account name and access key. Select **Azure Global** in the region selector and, if applicable, a gateway server.
-It is advisable to monitor both your Azure resources and Veeam's ability to leverage them as you would with any storage target you rely on to store your backups. A combination of Azure Monitor and Veeam's monitoring capabilities (i.e. the statistics tab in the jobs node of the Veeam Management Console or more advanced options like Veeam One Reporter) will help you keep your environment healthy.
+ ![Shows specifying an account in the Veeam Repository Wizard.](../media/veeam-repo-e.png)
-#### Microsoft Azure portal
+ > [!Note]
+ > If you choose not to use a Veeam gateway server, make sure that all scale-out repository extents have direct internet access.
-Microsoft Azure provides a robust monitoring solution in the form of [Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/insights/monitor-azure-resource). You can [configure Azure Monitor](https://docs.microsoft.com/azure/storage/common/monitor-storage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-powershell#configuration) to track Azure Storage capacity, transactions, availability, authentication, and more. The full reference of metrics tracked may be found [here](https://docs.microsoft.com/azure/storage/common/monitor-storage-reference). A few useful metrics to track are BlobCapacity - to make sure you remain below the maximum [Storage Account Capacity limit](https://docs.microsoft.com/azure/storage/common/scalability-targets-standard-account), Ingress, and Egress - to track the amount of data being written to and read from your Azure Storage account, and SuccessE2ELatency - to track the roundtrip time for requests to and from Azure Storage and your MediaAgent.
+11. In the container registration step, select your Azure Storage container, and select or create a folder to store your backups in. You can also define a soft limit on the overall storage capacity for Veeam to use (recommended). Review the displayed information in the summary section and complete the configuration tool. You can now select the new repository in your backup job configuration.
-You can also [create log alerts](https://docs.microsoft.com/azure/service-health/alerts-activity-log-service-notifications) to track Azure Storage service health and view the [Azure Status Dashboard](https://status.azure.com/status) at anytime.
+ ![Shows specifying a container in the Veeam Repository Wizard.](../media/veeam-repo-f.png)
-#### Veeam reporting
+ ![Shows creating a folder in the Veeam Repository Wizard.](../media/veeam-repo-g.png)
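The optional hardening described in step 7 can also be scripted. The following Azure CLI sketch shows one possible approach; the resource group, storage account, user, and IP range are placeholder values, not names from this article.

```azurecli
# Grant a specific user management rights on the storage account (RBAC).
az role assignment create \
  --assignee "backup-admin@contoso.com" \
  --role "Storage Account Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mybackupstorage"

# Deny public network access by default, then allow only the corporate range.
az storage account update \
  --resource-group myResourceGroup \
  --name mybackupstorage \
  --default-action Deny

az storage account network-rule add \
  --resource-group myResourceGroup \
  --account-name mybackupstorage \
  --ip-address 203.0.113.0/24

# Prevent accidental deletion of the storage account with a delete lock.
az lock create \
  --name backup-storage-delete-lock \
  --lock-type CanNotDelete \
  --resource-group myResourceGroup \
  --resource-name mybackupstorage \
  --resource-type Microsoft.Storage/storageAccounts
```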
-[Configuring Veeam One Reporting](https://helpcenter.veeam.com/docs/one/reporter/configure_reporter.html?ver=100)
+## Operational guidance
-[Veeam Backup and Replication Alarms](https://helpcenter.veeam.com/docs/one/monitor/backup_alarms.html?ver=100)
+### Azure alerts and performance monitoring
-### How to open support cases
+Monitor both your Azure resources and Veeam's ability to use them, just as you would any storage target that holds your backups. Combining Azure Monitor with Veeam's own monitoring capabilities (the **Statistics** tab in the **Jobs** node of the Veeam Management Console, or more advanced options like Veeam One Reporter) helps you keep your environment healthy.
-When you need assistance with your Backup to Azure Solution, we recommend opening a case with both Veeam and Azure so our support organizations can engage collaboratively, if necessary.
+#### Azure portal
-#### How to open a case with Veeam
+Azure provides a robust monitoring solution in the form of [Azure Monitor](../../../../../azure-monitor/essentials/monitor-azure-resource.md). You can [configure Azure Monitor](../../../../common/monitor-storage.md) to track Azure Storage capacity, transactions, availability, authentication, and more. For the full list of tracked metrics, see the [Azure Blob Storage monitoring data reference](../../../../blobs/monitor-blob-storage-reference.md). A few useful metrics to track are:
+
+- **BlobCapacity** - to make sure you remain below the maximum [storage account capacity limit](../../../../common/scalability-targets-standard-account.md).
+- **Ingress** and **Egress** - to track the amount of data written to and read from your Azure storage account.
+- **SuccessE2ELatency** - to track the round-trip time for requests to and from Azure Storage and your Veeam components.
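You can also pull the same metrics from the command line. This is a minimal Azure CLI sketch; the subscription, resource group, and account names are placeholders.

```azurecli
# Query hourly average blob capacity for the storage account's blob service.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mybackupstorage/blobServices/default" \
  --metric BlobCapacity \
  --interval PT1H \
  --aggregation Average
```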
-Navigate to the [Veeam Customer Support Portal](https://www.veeam.com/support.html), Sign in, and open a case.
+You can also [create log alerts](../../../../../service-health/alerts-activity-log-service-notifications-portal.md) to track Azure Storage service health and view the [Azure status dashboard](https://status.azure.com/status) at any time.
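As an illustration, the following Azure CLI sketch creates a Service Health activity log alert scoped to the subscription that hosts the backup storage account. The alert name, resource group, and action group are placeholder values.

```azurecli
# Raise an alert when Azure posts a Service Health event for this subscription.
az monitor activity-log alert create \
  --name storage-service-health-alert \
  --resource-group myResourceGroup \
  --scope "/subscriptions/<subscription-id>" \
  --condition "category=ServiceHealth" \
  --action-group "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/backup-admins"
```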
-If you need to understand the support options available to you by Veeam, see [Veeam Customer Support Policy](https://www.veeam.com/veeam_software_support_policy_ds.pdf)
+#### Veeam reporting
-You may also call in to open a case:
+- [Configure Veeam One Reporting](https://helpcenter.veeam.com/docs/one/reporter/configure_reporter.html?ver=100)
+- [Veeam backup and replication alarms](https://helpcenter.veeam.com/docs/one/monitor/backup_alarms.html?ver=100)
-[Worldwide Support Numbers](https://www.veeam.com/contacts.html?ad=in-text-link#support-numbers)
+### How to open support cases
-#### How to open a case with the Azure support team
+When you need help with your backup-to-Azure solution, open a case with both Veeam and Azure so that the two support organizations can collaborate, if necessary.
-Within the [Azure portal](https://portal.azure.com) search for "Support" in the Search Bar at the top of the portal and choose "+ New Support Request"
-> [!Note]
-When opening a case, please be specific that you need assistance with "Azure Storage" or "Azure Networking" and **NOT** "Azure Backup." Azure Backup is a Microsoft Azure native service and your case will be routed incorrectly.
+#### To open a case with Veeam
-### Links to relevant Veeam documentation
+On the [Veeam customer support site](https://www.veeam.com/support.html), sign in, and open a case.
+
+To understand the support options available to you by Veeam, see the [Veeam Customer Support Policy](https://www.veeam.com/veeam_software_support_policy_ds.pdf).
-Veeam documentation providing further detail:
+You may also call to open a case: [Worldwide support numbers](https://www.veeam.com/contacts.html?ad=in-text-link#support-numbers)
-[Veeam User Guide](https://helpcenter.veeam.com/docs/backup/hyperv/overview.html?ver=100)
+#### To open a case with Azure
-[Veeam Architecture Guide](https://helpcenter.veeam.com/docs/backup/vsphere/backup_architecture.html?ver=100)
+In the [Azure portal](https://portal.azure.com), search for **support** in the search bar at the top, and then select **Help + support** > **New support request**.
-### Link to Marketplace Offering
+> [!NOTE]
+> When you open a case, be specific that you need assistance with Azure Storage or Azure Networking. Do not specify Azure Backup. Azure Backup is the name of an Azure service and your case will be routed incorrectly.
-You can also continue to use the Veeam solution you know and trust to protect your workloads running on Azure. Veeam has made it easy to deploy their solution in Azure and protect Azure Virtual Machines and many other Azure Services.
+### Links to relevant Veeam documentation
-[Deploy Veeam B&R via the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.veeam-backup-replication?tab=overview)
+See the following Veeam documentation for further detail:
-[Azure Datasheet](https://www.veeam.com/backup-azure.html?ad=menu-products)
+- [Veeam User Guide](https://helpcenter.veeam.com/docs/backup/hyperv/overview.html?ver=100)
+- [Veeam Architecture Guide](https://helpcenter.veeam.com/docs/backup/vsphere/backup_architecture.html?ver=100)
+### Marketplace offerings
-## Next steps
+You can continue to use the Veeam solution you know and trust to protect your workloads running on Azure. Veeam has made it easy to deploy their solution in Azure and protect Azure Virtual Machines and many other Azure services.
-Explore additional resources on these external websites to get information about specialized usage scenarios:
+- [Deploy Veeam Backup & Replication via the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.veeam-backup-replication?tab=overview)
+- [Veeam's Azure backup and recovery website](https://www.veeam.com/backup-azure.html?ad=menu-products)
-[Veeam How to Videos](https://www.veeam.com/how-to-videos.html?ad=menu-resources)
+## Next steps
-[Veeam Technical Documentations](https://www.veeam.com/documentation-guides-datasheets.html?ad=menu-resources)
+See the following resources on the Veeam website for information about specialized usage scenarios:
-[Veeam Knowledge Base and FAQ](https://www.veeam.com/knowledge-base.html?ad=menu-resources)
+- [Veeam How to Videos](https://www.veeam.com/how-to-videos.html?ad=menu-resources)
+- [Veeam Technical Documentation](https://www.veeam.com/documentation-guides-datasheets.html?ad=menu-resources)
+- [Veeam Knowledge Base and FAQ](https://www.veeam.com/knowledge-base.html?ad=menu-resources)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/container-solutions/partner-overview.md
Title: Storage container solution partners-
-description: List of industry system integrators building customer solutions for container with Azure Storage
-keywords: Storage, Blob, container
+
+description: List of Microsoft partner companies that build customer solutions for containers with Azure Storage
+ Previously updated : 12/11/2020- Last updated : 03/15/2021+ + # Azure Storage container management partners This article highlights Microsoft partner solutions that enable automation, data protection, and storage management of container-based solutions at scale.
-## Verified Container Management partners
-| Partner | Description | Website/Product link |
+## Verified partners
+
+| Partner | Description | Website/product link |
| - | -- | -- |
-| ![Kasten company logo](./media/kasten-logo.png) |**Kasten**<br>Kasten by Veeam is the leader in Kubernetes backup and disaster recovery. Kasten helps enterprises overcome Day 2 data management challenges to confidently run applications on Kubernetes.<br>The Kasten K10 data management software platform, provides enterprise operations teams a scalable, and secure system for BCDR and mobility of Kubernetes applications.|[Partner page](https://docs.kasten.io/latest/install/azure/azure.html)|
-| ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage is the Kubernetes Data Services Platform. Enterprises trust to run mission-critical applications in containers in production. Only Portworx provides a solution for persistent storage, BCDR, data security, cross-cloud and data migrations for applications running integrated on Kubernetes. As a result, Portworx is the #1 most used Kubernetes data services platform by Global 2000 companies. Based in Los Altos, Calif., the company was also named the Leader in the 2020 GigaOm Radar for Data Storage for Kubernetes report. |[Partner page](https://portworx.com/azure/)|
-| ![<n/>Robin.io company logo](./media/robin-logo.png) |**<n/>Robin.io**<br><n/>Robin.io provides an application and data management platform that enables enterprises and 5G service providers to deliver complex application pipelines as a service. Built on industry-standard Kubernetes, Robin allows developers and platform engineers to rapidly deploy and easily manage data- and network-centric applicationsΓÇöincluding big data, NoSQL, and 5GΓÇöindependent of underlying infrastructure resources. <n/>Robin.io technology is used globally by companies including BNP Paribas, Palo Alto Networks, Rakuten Mobile, SAP, Sabre, and USAA. <n/>Robin.io is headquartered in Silicon Valley, California <br> Robin Cloud Native Storage (CNS) brings advanced data management capabilities to Microsoft Azure Kubernetes Service (AKS), MicrosoftΓÇÖs fully managed Kubernetes service. Robin CNS seamlessly integrates with Azure Disk Storage to simplify management of stateful applications. Developers and DevOps teams can deploy Robin CNS as a standard Kubernetes operator on AKS. Robin Cloud Native Storage helps simplify data management operations such as BCDR and cloning of entire applications. |[Partner page](https://robin.io/robin-cloud-native-storage-for-microsoft-aks/)|<br>|
-|<br>|
+| ![Kasten company logo](./media/kasten-logo.png) |**Kasten**<br>Kasten by Veeam provides a solution for Kubernetes backup and disaster recovery. Kasten helps enterprises overcome Day 2 data management challenges to confidently run applications on Kubernetes.<br><br>The Kasten K10 data management software platform provides enterprise operations teams a scalable and secure system for BCDR and mobility of Kubernetes applications.|[Partner page](https://docs.kasten.io/latest/install/azure/azure.html)|
+| ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage provides a solution for persistent storage, BCDR, data security, cross-cloud, and data migrations for applications running integrated on Kubernetes.|[Partner page](https://portworx.com/azure/)|
+| ![Robin.io company logo](./media/robin-logo.png) |**Robin.io**<br>Robin.io provides an application and data management platform that enables enterprises and 5G service providers to deliver complex application pipelines as a service. Built on industry-standard Kubernetes, Robin allows developers and platform engineers to rapidly deploy and easily manage data- and network-centric applications, independent of underlying infrastructure resources.<br><br>Robin Cloud Native Storage (CNS) brings advanced data management capabilities to Azure Kubernetes Service. Robin CNS seamlessly integrates with Azure Disk Storage to simplify management of stateful applications. Developers and DevOps teams can deploy Robin CNS as a standard Kubernetes operator on AKS. Robin Cloud Native Storage helps simplify data management operations such as BCDR and cloning of entire applications. |[Partner page](https://robin.io/robin-cloud-native-storage-for-microsoft-aks/)|
## Next steps
-To learn more about some of our other partners, see [Analytics and Big Data partners](..\analytics\partner-overview.md), [Archive, Backup and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md), [Data Management and Migration partners](..\data-management\partner-overview.md), and also [Primary and Secondary Storage partners](..\primary-secondary-storage\partner-overview.md).
-
+To learn more about some of our other partners, see:
+- [Analytics and big data partners](..\analytics\partner-overview.md)
+- [Archive, backup, and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md)
+- [Data management and migration partners](..\data-management\partner-overview.md)
+- [Primary and secondary storage partners](..\primary-secondary-storage\partner-overview.md)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/data-management/partner-overview.md
Title: Storage data governance, management and migration partners -
-description: List of Microsoft partners building customer solutions for data governance, management and migration with Azure Storage
-keywords: Storage, Blob, data management
+ Title: Storage data governance, management, and migration partners
+
+description: List of Microsoft partner companies that build customer solutions for data governance, management, and migration with Azure Storage
+ Previously updated : 12/11/2020- Last updated : 03/15/2021+ +
-# Azure Storage Data Governance, Management, and Migration partners
+# Azure Storage data governance, management, and migration partners
+
+This article highlights Microsoft partner companies integrated with Azure Storage that can improve your overall data management capabilities. These partner solutions can support storage assessment and reporting, platform-agnostic migration, replication, cloud tiering, or data governance.
-This article highlights Microsoft partner companies integrated with Azure Storage that improve customerΓÇÖs overall data management capabilities. These partner solutions can support storage assessment and reporting, platform-agnostic migration, replication, cloud tiering, or data governance.
+## Verified partners
-## Verified Data Governance, Management, and Migration partners
-| Partner | Description | Website/Product link |
+| Partner | Description | Website/product link |
| - | -- | -- |
-|![Commvault company logo](./media/commvault-logo.jpg) |**Commvault**<br>Optimize, protect, migrate and index your data using Microsoft infrastructure with Commvault. Take control of your data with Commvault Complete Data Protection, the Microsoft-centric and Azure-centric data management solution. Commvault provides the tools you need to manage, migrate, access and recover your data no matter where it resides, while reducing cost and risk. Microsoft trusts Commvault to manage and protect some of its most important data and workloads and you can as well. |[Partner Page](https://www.commvault.com/complete-data-protection)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/commvault.commvault)|
-|![Data Dynamics company logo](./media/datadyn-logo.png) |**Data Dynamics**<br>Data Dynamics is a leading provider of enterprise solutions for managing unstructured data for hybrid and multi-cloud environments. Our Unified Unstructured Data Management Platform uses analytics and automation to help you intelligently and efficiently move data from heterogenous storage environments (SMB, NFS, or S3 Object) into Azure. Optimize your storage infrastructure by ensuring that you have the right data, in the right location at the right time. The platform provides seamless integration, enterprise scale, and performance that enables the efficient management of data for hybrid and multi-cloud environments. Use cases include: intelligent cloud migration, disaster recovery, archive, backup, and infrastructure optimization and data management. |[Partner page](https://www.datadynamicsinc.com/ms-azure-partner/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_2-preview?tab=Overview&flightCodes=18994ad6-20dc-4bdb-ae27-e7ef3263fa9e)|
-![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi is trusted by the worldΓÇÖs largest companies and public-sector institutions to optimize their unstructured storage environments. DobiMigrate is enterprise-class software that gets your file and object data ΓÇô safely, quickly, easily, and cost effectively ΓÇô to Microsoft Azure. Experience big-time savings across the board thanks to a fast and efficient transition to Azure. Say goodbye to time-consuming migration tasks and focus instead on value-added activities. Realize storage cost savings or run workloads in the cloud. Reduce storage management and cost overhead. Grow storage footprint without CAPEX investments.|[Partner Page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview)|
-![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>InformaticaΓÇÖs enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality and, governance of enterprise data on Azure. Our AI-powered, metadata-driven data integration, data quality and governance capabilities enables you to modernize analytics and accelerate your move to a data warehouse or data lake on Microsoft Azure|[Partner Page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
-|![Komprise company logo](./media/komprise-logo.png) |**Komprise**<br>Komprise enables businesses to get unprecedented visibility across silos to manage file and object data and save costs. Komprise Intelligent Data Management software enables businesses to consistently analyze, move, and manage data across clouds.<br>Komprise enables businesses to analyze data growth across any NAS and Object storage to identify savings of 70%+. Komprise enables archiving of cold data to Microsoft Azure, and runs data migrations, transparent data archiving, and data replications to Azure Files and Azure Blob. Patented Komprise Transparent Move Technology enables files to be archived without changing user access. Global searching and tagging enables virtual data lakes for AI, Big Data, and ML applications. |[Partner page](https://www.komprise.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) |
+|![Commvault company logo](./media/commvault-logo.jpg) |**Commvault**<br>Optimize, protect, migrate, and index your data using Microsoft infrastructure with Commvault. Take control of your data with Commvault Complete Data Protection, the Microsoft-centric and Azure-centric data management solution. Commvault provides the tools you need to manage, migrate, access, and recover your data no matter where it resides, while reducing cost and risk.|[Partner page](https://www.commvault.com/complete-data-protection)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/commvault.commvault)|
+|![Data Dynamics company logo](./media/datadyn-logo.png) |**Data Dynamics**<br>Data Dynamics provides enterprise solutions to manage unstructured data for hybrid and multi-cloud environments. Their Unified Unstructured Data Management Platform uses analytics and automation to help you intelligently and efficiently move data from heterogeneous storage environments (SMB, NFS, or S3 Object) into Azure. The platform provides seamless integration, enterprise scale, and performance that enables the efficient management of data for hybrid and multi-cloud environments. Use cases include intelligent cloud migration, disaster recovery, archive, backup, infrastructure optimization, and data management. |[Partner page](https://www.datadynamicsinc.com/ms-azure-partner/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_2-preview?tab=Overview&flightCodes=18994ad6-20dc-4bdb-ae27-e7ef3263fa9e)|
+|![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br>Datadobi can optimize your unstructured storage environments. DobiMigrate is enterprise-class software that gets your file and object data to Azure safely, quickly, easily, and cost effectively. Focus on value-added activities instead of time-consuming migration tasks. Grow your storage footprint without CAPEX investments.|[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview)|
+|![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>Informatica's enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, data quality, and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
+|![Komprise company logo](./media/komprise-logo.png) |**Komprise**<br>Komprise enables visibility across silos to manage file and object data and save costs. Komprise Intelligent Data Management software lets you consistently analyze, move, and manage data across clouds.<br><br>Komprise helps you analyze data growth across any network attached storage (NAS) and object storage to identify significant cost savings. You can also archive cold data to Azure, and run data migrations, transparent data archiving, and data replication to Azure Files and Blob storage. Patented Komprise Transparent Move Technology enables you to archive files without changing user access. Global search and tagging enables virtual data lakes for AI, big data, and machine learning applications. |[Partner page](https://www.komprise.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) |
+ ## Next steps
-To learn more about some of our other partners, see [Analytics and Big Data partners](..\analytics\partner-overview.md), [Archive, Backup and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md), [Container Solution partners](..\container-solutions\partner-overview.md), and also [Primary and Secondary Storage partners](..\primary-secondary-storage\partner-overview.md).
+To learn more about some of our other partners, see:
+
+- [Analytics and big data partners](..\analytics\partner-overview.md)
+- [Archive, backup, and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md)
+- [Container solution partners](..\container-solutions\partner-overview.md)
+- [Primary and secondary storage partners](..\primary-secondary-storage\partner-overview.md)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
Title: Storage partners for primary and secondary storage-
-description: List of Microsoft partners building customer solutions for primary and secondary storage solutions with Azure Storage
-keywords: Storage, Blob, primary, secondary
+
+description: Microsoft partners who build customer solutions for primary and secondary storage solutions with Azure Storage
+ Previously updated : 12/11/2020- Last updated : 03/15/2021+ +
-# Azure Storage partners for Primary and Secondary Storage
+# Azure Storage partners for primary and secondary storage
+
+This article highlights Microsoft partner companies that deliver a network attached storage (NAS) or storage area network (SAN) solution. The solution can run on-premises, in Azure, or as a hybrid deployment that uses Azure Storage as a cost-effective tier. These solutions enable customers to use the same solution in any of their environments.
-This article highlights Microsoft partner companies that deliver a NAS or SAN solution, on-premises, in Azure or hybrid solution that uses Azure Storage as a cost-effective tier. These solutions can enable customers to use the same solution in any of their environments.
+## Verified partners
-## Verified Primary and Secondary Storage partners
-| Partner | Description | Website/Product link |
+| Partner | Description | Website/product link |
| - | -- | -- |
-| ![Nasuni](./media/nasuni-logo.png) |**Nasuni**<br>Nasuni is a file storage platform that replaces enterprise NAS and file servers including the associated infrastructure for BCDR and disk tiering. Our patented, SaaS platform brings a new generation of IT simplicity, unlimited capacity, and the lowest cost to enterprise file storage. Virtual edge appliances keep files quickly accessible and synchronized with the cloud. A robust management console enables IT to manage multiple storage sites from one location including the ability to provision, monitor, control, and report on your file infrastructure. Continuous versioning to the cloud makes backup a thing of the past and brings file restore times down to minutes.<br>Nasuni cloud file storage built on Microsoft Azure eliminates traditional Network Attached Storage (NAS) and file servers across any number of locations and replaces it with a cloud solution. Nasuni cloud file storage provides infinite file storage, backups, disaster recovery, and multi-site file sharing. Nasuni is a software-as-a-service used for data-center-to-the-cloud initiatives, multi-location file synching, sharing, and collaboration, as well as a cloud storage companion for VDI environments. Nasuni uniquely eliminates complex and costly on-premises storage hardware, eliminates the need for backup technology, and enables site recovery within 15 minutes, simplifies IT management, enables employees to work from anywhere at any time, and is a fast antidote to a ransomware attack. |[Partner page](https://www.nasuni.com/partner/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nasunicorporation.nasuni)|
-| ![Panzura](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS. This includes automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)|
-| ![Pure Storage](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds. One of the fastest-growing enterprise IT companies in history, Pure helps customers put data to use while reducing the complexity and expense of managing the infrastructure behind it. And with a certified customer satisfaction score in the top one percent of B2B companies, Pure's ever-expanding list of customers are among the happiest in the world. |[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.cbs_license_offer)|
-| ![Scality](./media/scality-logo.png) |**Scality**<br>Scality builds a market leading software-defined file and object platform designed for on-premise, hybrid, and multi-cloud environments. ScalityΓÇÖs integration with Azure Blob Storage enable enterprises to manage and secure their data between on-premises environments and Azure as well as meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/scality.scalityconnecthourly?tab=Overview)|
-| ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology has been developing high-performance, secure, data management software solutions since 2004. Their customers are based in Media and Entertainment, Enterprise IT, Surveillance, and SMB/SME markets. Customers use Tiger solutions worldwide in over 120 countries. Tiger Technology enables organizations of any size and scale to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br> Tiger Bridge is a non-proprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space and enables hybrid workflows. This transparent file server extension enables millions of Windows server users to benefit from Microsoft Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses a number of data management challenges including, File Server Extension, Disaster Recovery, Cloud Migration, Backup & Archive, Remote Collaboration, and Multi-site Sync as well as continuous Data Protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
-## Next steps
-To learn more about some of our other partners, see [Big Data and Analytics partners](..\analytics\partner-overview.md), [Archive, Backup and BCDR partners](..\backup-archive-disaster-recovery\partner-overview.md), [Container Solution partners](..\container-solutio