Updates from: 03/29/2021 03:06:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
The inline frame element `<iframe>` is used to embed a document in an HTML5 web
When using iframe, consider the following: - Embedded sign-in supports local accounts only. Most social identity providers (for example, Google and Facebook) block their sign-in pages from being rendered in inline frames.-- Because Azure AD B2C session cookies within an iframe are considered third-party cookies, certain browsers (for example Safari or Chrome in incognito mode) either block or clear these cookies, resulting in an undesirable user experience. To prevent this issue, make sure your application domain name and your Azure AD B2C domain have the *same origin*. To use the same origin, [enable custom domains](custom-domain.md) for Azure AD B2C tenant, then configure your web app with the same origin. For example, an application hosted on https://app.contoso.com has the same origin as Azure AD B2C running on https://login.contoso.com.
+- Because Azure AD B2C session cookies within an iframe are considered third-party cookies, certain browsers (for example Safari or Chrome in incognito mode) either block or clear these cookies, resulting in an undesirable user experience. To prevent this issue, make sure your application domain name and your Azure AD B2C domain have the *same origin*. To use the same origin, [enable custom domains](custom-domain.md) for Azure AD B2C tenant, then configure your web app with the same origin. For example, an application hosted on 'https://app.contoso.com' has the same origin as Azure AD B2C running on 'https://login.contoso.com'.
## Prerequisites
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
In this sample tutorial, we provide guidance on how to integrate [Microsoft Dynamics 365 Fraud Protection](https://docs.microsoft.com/dynamics365/fraud-protection/overview) (DFP) with Azure Active Directory (AD) B2C.
-Microsoft DFP provides clients with the capability to assess if the risk of attempts to create new accounts and attempts to login to client’s ecosystem are fraudulent. Microsoft DFP assessment can be used by the customer to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts. Account protection includes artificial intelligence empowered device fingerprinting, APIs for real-time risk assessment, rule and list experience to optimize risk strategy as per client’s business needs, and a scorecard to monitor fraud protection effectiveness and trends in client’s ecosystem.
+Microsoft DFP provides clients with the capability to assess whether attempts to create new accounts and attempts to sign in to a client's ecosystem are fraudulent. Microsoft DFP assessment can be used by the customer to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts. Account protection includes artificial intelligence empowered device fingerprinting, APIs for real-time risk assessment, a rule and list experience to optimize risk strategy as per the client's business needs, and a scorecard to monitor fraud protection effectiveness and trends in the client's ecosystem.
In this sample, we'll be integrating the account protection features of Microsoft DFP with an Azure AD B2C user flow. The service will externally fingerprint every sign-in or sign up attempt and watch for any past or present suspicious behavior. Azure AD B2C invokes a decision endpoint from Microsoft DFP, which returns a result based on all past and present behavior from the identified user, and also the custom rules specified within the Microsoft DFP service. Azure AD B2C makes an approval decision based on this result and passes the same back to Microsoft DFP.
The following architecture diagram shows the implementation.
### Deploy the Azure AD B2C API code
-Deploy the [provided API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Dynamics-Fraud-Protection/API) to an Azure service. The code can be [published from Visual Studio](/visualstudio/deployment/quickstart-deploy-to-azure?view=vs-2019).
+Deploy the [provided API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Dynamics-Fraud-Protection/API) to an Azure service. The code can be [published from Visual Studio](/visualstudio/deployment/quickstart-deploy-to-azure).
Set up CORS, and add **Allowed Origin** `https://{your_tenant_name}.b2clogin.com`.
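As a rough illustration of that CORS step, the following sketch adds the allowed origin with the Azure CLI from PowerShell; the app name and resource group are placeholders, not values from the tutorial.

```powershell
# Sketch only: app name and resource group are placeholders.
$tenantName    = "yourtenant"            # B2C tenant short name
$appName       = "dfp-api-app"           # hypothetical App Service hosting the API
$resourceGroup = "dfp-integration-rg"    # hypothetical resource group

# Add the B2C login domain as an allowed CORS origin on the App Service.
az webapp cors add `
    --name $appName `
    --resource-group $resourceGroup `
    --allowed-origins "https://$tenantName.b2clogin.com"
```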
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
| Placeholder | Replace with | Notes |
| :-- | :-- | :-- |
-|{your_tenant_name} | Your tenant short name | “yourtenant” from yourtenant.onmicrosoft.com |
+|{your_tenant_name} | Your tenant short name | "yourtenant" from yourtenant.onmicrosoft.com |
|{your_tenantId} | Tenant ID of your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
| {your_tenant_IdentityExperienceFramework_appid} | App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
| {your_tenant_ProxyIdentityExperienceFramework_appid} | App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_extensions_appid} | App ID of your tenant’s storage application | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_extensions_app_objectid} | Object ID of your tenant’s storage application | 01234567-89ab-cdef-0123-456789abcdef |
+| {your_tenant_extensions_appid} | App ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
+| {your_tenant_extensions_app_objectid} | Object ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
| {your_app_insights_instrumentation_key} | Instrumentation key of your app insights instance* | 01234567-89ab-cdef-0123-456789abcdef |
| {your_ui_base_url} | Endpoint in your app service from where your UI files are served | https://yourapp.azurewebsites.net/B2CUI/GetUIPage |
| {your_app_service_url} | URL of your app service | https://yourapp.azurewebsites.net |
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-lifecycle.md
Here are examples of how you can use PowerShell cmdlets to configure the expirat
Remove-AzureADMSGroupLifecyclePolicy -Id "26fcc232-d1c3-4375-b68d-15c296f1f077" ```
-The following cmdlets can be used to configure the policy in more detail. For more information, see [PowerShell documentation](/powershell/module/azuread/?view=azureadps-2.0-preview#groups).
+The following cmdlets can be used to configure the policy in more detail. For more information, see [PowerShell documentation](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#groups).
- Get-AzureADMSGroupLifecyclePolicy
- New-AzureADMSGroupLifecyclePolicy
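As a minimal sketch of the cmdlets listed above (assuming the AzureAD module and an existing `Connect-AzureAD` session; the notification address is a placeholder), creating and inspecting a policy looks roughly like this:

```powershell
# Create a policy that expires all Microsoft 365 groups after 180 days.
# Expiration notices for ownerless groups go to the placeholder address below.
New-AzureADMSGroupLifecyclePolicy -GroupLifetimeInDays 180 `
    -ManagedGroupTypes "All" `
    -AlternateNotificationEmails "groupadmins@contoso.com"

# Read the policy back to confirm the settings.
Get-AzureADMSGroupLifecyclePolicy
```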
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/facebook-federation.md
Now you'll set the Facebook client ID and client secret, either by entering it i
`New-AzureADMSIdentityProvider -Type Facebook -Name Facebook -ClientId [Client ID] -ClientSecret [Client secret]` > [!NOTE]
- > Use the client ID and client secret from the app you created above in the Facebook developer console. For more information, see the [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview) article.
+ > Use the client ID and client secret from the app you created above in the Facebook developer console. For more information, see the [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview&preserve-view=true) article.
## How do I remove Facebook federation? You can delete your Facebook federation setup. If you do so, any users who have signed up through user flows with their Facebook accounts will no longer be able to log in.
You can delete your Facebook federation setup. If you do so, any users who have
`Remove-AzureADMSIdentityProvider -Id Facebook-OAUTH` > [!NOTE]
- > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview).
+ > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview&preserve-view=true).
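If you want to confirm the identity provider ID before removing it, a sketch like the following can help; it assumes the AzureAD PowerShell preview module and an existing `Connect-AzureAD` session.

```powershell
# List the external identity providers configured in the tenant to confirm the ID.
Get-AzureADMSIdentityProvider

# Remove the Facebook federation by its ID (Facebook-OAUTH, as shown in the article).
Remove-AzureADMSIdentityProvider -Id Facebook-OAUTH
```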
## Next steps
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
You'll now set the Google client ID and client secret. You can use the Azure por
`New-AzureADMSIdentityProvider -Type Google -Name Google -ClientId <client ID> -ClientSecret <client secret>` > [!NOTE]
- > Use the client ID and client secret from the app you created in "Step 1: Configure a Google developer project." For more information, see [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview).
+ > Use the client ID and client secret from the app you created in "Step 1: Configure a Google developer project." For more information, see [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview&preserve-view=true).
## How do I remove Google federation? You can delete your Google federation setup. If you do so, Google guest users who have already redeemed their invitation won't be able to sign in. But you can give them access to your resources again by deleting them from the directory and reinviting them.
You can delete your Google federation setup. If you do so, Google guest users wh
`Remove-AzureADMSIdentityProvider -Id Google-OAUTH` > [!NOTE]
- > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview).
+ > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview&preserve-view=true).
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Having access to sign-in activity, audits and risk events for Azure AD is crucia
#### Logs recommended reading -- [Azure Active Directory audit API reference](/graph/api/resources/directoryaudit?view=graph-rest-beta%3fview%3dgraph-rest-beta)-- [Azure Active Directory sign-in activity report API reference](/graph/api/resources/signin?view=graph-rest-beta%3fview%3dgraph-rest-beta)
+- [Azure Active Directory audit API reference](/graph/api/resources/directoryaudit?view=graph-rest-beta)
+- [Azure Active Directory sign-in activity report API reference](/graph/api/resources/signin?view=graph-rest-beta)
- [Get data using the Azure AD Reporting API with certificates](../reports-monitoring/tutorial-access-api-with-certificates.md)
- [Microsoft Graph for Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md)
- [Office 365 Management Activity API reference](/office/office-365-management-api/office-365-management-activity-api-reference)
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
Use the following criteria when creating a new service account.
Use the following settings with user accounts used as service accounts:
-* [**Account Expiry**](/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps): set the service account to automatically expire a set time after its review period unless it's determined that it should continue
+* [**Account Expiry**](/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps&preserve-view=true): set the service account to automatically expire a set time after its review period unless it's determined that it should continue
* **LogonWorkstations**: restrict permissions for where the service account can sign in. If it runs locally on a machine and accesses only resources on that machine, restrict it from logging on anywhere else.
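A minimal sketch of both settings with the ActiveDirectory module follows; the account name, expiration date, and workstation name are placeholders.

```powershell
# Placeholder account, date, and computer name; requires the ActiveDirectory RSAT module.
# Expire the service account shortly after its next scheduled review.
Set-ADAccountExpiration -Identity "svc-payroll" -DateTime "2021-12-31"

# Restrict where the account can sign in (comma-separated computer names).
Set-ADUser -Identity "svc-payroll" -LogonWorkstations "PAYROLL-APP01"
```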
Use the following process for lifecycle management of service accounts:
### Collect usage information for the service account
-Collect the relevant business information for each service account. The below table shows minimum information to be collected, but you should collect everything necessary to make the business case for the accounts’ existence.
+Collect the relevant business information for each service account. The table below shows the minimum information to be collected, but you should collect everything necessary to make the business case for the accounts' existence.
| Data| Details | | - | - |
The risk assessment, once conducted and documented, may have impact on:
Create a service account only after relevant information is documented in your CMDB and you perform a risk assessment. Account restrictions should be aligned with the risk assessment. Consider the following restrictions when relevant to your assessment:
-* [Account Expiry](/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps)
+* [Account Expiry](/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps&preserve-view=true)
- * For all user accounts used as service accounts, define a realistic and definite end-date for use. Set this using the “Account Expires” flag. For more details, refer to[ Set-ADAccountExpiration](/powershell/module/addsadministration/set-adaccountexpiration).
+ * For all user accounts used as service accounts, define a realistic and definite end-date for use. Set this using the "Account Expires" flag. For more details, refer to [Set-ADAccountExpiration](/powershell/module/addsadministration/set-adaccountexpiration).
* Log On To ([LogonWorkstation](/powershell/module/addsadministration/set-aduser))
After removing all permissions, use this process for removing the account.
3. Delete the service account after the remain-disabled policy period is fulfilled.
- * For MSAs, you can [uninstall it](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps) using PowerShell or delete manually from the managed service account container.
+ * For MSAs, you can [uninstall it](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps&preserve-view=true) using PowerShell or delete manually from the managed service account container.
* For computer or user accounts, you can manually delete the account in Active Directory.
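For illustration, the removal steps might look like the following sketch, assuming the ActiveDirectory module; the account names are placeholders.

```powershell
# Placeholder account names; requires the ActiveDirectory RSAT module.
# Uninstall the MSA from the server it was installed on...
Uninstall-ADServiceAccount -Identity "gmsa-web01"

# ...then, once the remain-disabled period has passed, delete it from the directory.
Remove-ADServiceAccount -Identity "gmsa-web01"

# For a user account that was used as a service account:
Remove-ADUser -Identity "svc-payroll"
```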
active-directory Service Accounts Group Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-group-managed.md
gMSAs offer a single identity solution with greater security while reducing admi
Use gMSAs as the preferred account type for on-premises services unless a service, such as Failover Clustering, doesn't support it. > [!IMPORTANT]
-> You must test your service with gMSAs prior to deployment into production. To do so, set up a test environment and ensure the application can use the gMSA, and access the resources it needs to access. For more information, see [Support for group managed service accounts](/system-center/scom/support-group-managed-service-accounts?view=sc-om-2019).
+> You must test your service with gMSAs prior to deployment into production. To do so, set up a test environment and ensure the application can use the gMSA, and access the resources it needs to access. For more information, see [Support for group managed service accounts](/system-center/scom/support-group-managed-service-accounts).
If a service doesn't support the use of gMSAs, your next best option is to use a standalone Managed Service Account (sMSA). sMSAs provide the same functionality as a gMSA, but are intended for deployment on a single server only.
If you can't use a gMSA or sMSA is supported by your service, then the service m
## Assess the security posture of gMSAs
-gMSAs are inherently more secure than standard user accounts, which require ongoing password management. However, it's important to consider gMSAs’ scope of access as you look at their overall security posture.
+gMSAs are inherently more secure than standard user accounts, which require ongoing password management. However, it's important to consider gMSAs' scope of access as you look at their overall security posture.
The following table shows potential security issues and mitigations for using gMSAs.
Get-ADServiceAccount -Filter *
# To filter results to only gMSAs:
-Get-ADServiceAccount –Filter * | where $_.ObjectClass -eq "msDS-GroupManagedServiceAccount”}
+Get-ADServiceAccount -Filter * | where {$_.ObjectClass -eq "msDS-GroupManagedServiceAccount"}
``` ## Manage gMSAs
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-managed-identities.md
You can get a list of all managed identities in your tenant with the following G
`https://graph.microsoft.com/v1.0/servicePrincipals?$filter=(servicePrincipalType eq 'ManagedIdentity') `
-You can filter these requests. For more information, see the Graph documentation for [GET servicePrincipal](/graph/api/serviceprincipal-get?view=).
+You can filter these requests. For more information, see the Graph documentation for [GET servicePrincipal](/graph/api/serviceprincipal-get).
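If you prefer PowerShell to a raw REST call, a rough equivalent with the Microsoft Graph PowerShell SDK is sketched below; it assumes the Microsoft.Graph module and permission to read service principals.

```powershell
# Assumes the Microsoft.Graph PowerShell SDK is installed.
Connect-MgGraph -Scopes "Application.Read.All"

# Issue the same filtered request shown above through the SDK's generic request cmdlet.
$uri = "https://graph.microsoft.com/v1.0/servicePrincipals?`$filter=(servicePrincipalType eq 'ManagedIdentity')"
$response = Invoke-MgGraphRequest -Method GET -Uri $uri

# Print the display name of each managed identity returned.
$response.value | ForEach-Object { $_.displayName }
```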
## Assess the security of managed identities
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
The following scenarios are not supported for staged rollout:
To test the *password hash sync* sign-in by using staged rollout, follow the pre-work instructions in the next section.
-For information about which PowerShell cmdlets to use, see [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview#staged_rollout).
+For information about which PowerShell cmdlets to use, see [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout).
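As a hedged sketch of what those cmdlets can look like (assuming the AzureADPreview module; the group object ID is a placeholder), enabling password hash sync for a pilot group might be done as follows:

```powershell
# Assumes the AzureADPreview module and an existing Connect-AzureAD session.
# Create a staged rollout (feature rollout) policy for password hash sync.
$policy = New-AzureADMSFeatureRolloutPolicy -Feature PasswordHashSync `
    -DisplayName "Password hash sync pilot" -IsEnabled $true

# Add a pilot security group (placeholder object ID) to the policy.
Add-AzureADMSFeatureRolloutPolicyDirectoryObject -Id $policy.Id `
    -RefObjectId "11111111-1111-1111-1111-111111111111"
```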
## Pre-work for password hash sync
A: No, this feature is designed for testing cloud authentication. After successf
**Q: Can I use PowerShell to perform staged rollout?**
-A: Yes. To learn how to use PowerShell to perform staged rollout, see [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview#staged_rollout).
+A: Yes. To learn how to use PowerShell to perform staged rollout, see [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout).
## Next steps-- [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview#staged_rollout )
+- [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout )
- [Change the sign-in method to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) - [Change sign-in method to pass-through authentication](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso)
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-deployment-plan.md
The steps to deploy your Application Proxy are covered in this [tutorial for add
Publishing applications assumes that you have satisfied all the pre-requisites and that you have several connectors showing as registered and active in the Application Proxy page.
-You can also publish applications by using [PowerShell](/powershell/module/azuread/?view=azureadps-2.0-preview).
+You can also publish applications by using [PowerShell](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true).
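For example, a single application can be published from PowerShell roughly as follows; this sketch assumes the AzureADPreview module, and the URLs and display name are placeholders.

```powershell
# Assumes the AzureADPreview module and an existing Connect-AzureAD session.
# External and internal URLs are placeholders.
New-AzureADApplicationProxyApplication -DisplayName "Contoso timesheets" `
    -ExternalUrl "https://timesheets-contoso.msappproxy.net/" `
    -InternalUrl "https://timesheets.contoso.local/" `
    -ExternalAuthenticationType AadPreAuthentication
```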
Below are some best practices to follow when publishing an application:
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-powershell-samples.md
For more information about the cmdlets used in these samples, see [Application P
| [List basic information for all Application Proxy apps](scripts/powershell-get-all-app-proxy-apps-basic.md) | Lists basic information (AppId, DisplayName, ObjId) about all the Application Proxy apps in your directory. |
| [List extended information for all Application Proxy apps](scripts/powershell-get-all-app-proxy-apps-extended.md) | Lists extended information (AppId, DisplayName, ExternalUrl, InternalUrl, ExternalAuthenticationType) about all the Application Proxy apps in your directory. |
| [List all Application Proxy apps by connector group](scripts/powershell-get-all-app-proxy-apps-by-connector-group.md) | Lists information about all the Application Proxy apps in your directory and which connector groups the apps are assigned to. |
-| [Get all Application Proxy apps with a token lifetime policy](scripts/powershell-get-all-app-proxy-apps-with-policy.md) | Lists all Application Proxy apps in your directory with a token lifetime policy and its details. This sample requires the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview). |
+| [Get all Application Proxy apps with a token lifetime policy](scripts/powershell-get-all-app-proxy-apps-with-policy.md) | Lists all Application Proxy apps in your directory with a token lifetime policy and its details. This sample requires the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true). |
|**Connector groups**||
| [Get all connector groups and connectors in the directory](scripts/powershell-get-all-connectors.md) | Lists all the connector groups and connectors in your directory. |
| [Move all apps assigned to a connector group to another connector group](scripts/powershell-move-all-apps-to-connector-group.md) | Moves all applications currently assigned to a connector group to a different connector group. |
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/howto-saml-token-encryption.md
When you configure a keyCredential using Graph, PowerShell, or in the applicatio
1. Use the latest Azure AD PowerShell module to connect to your tenant.
-1. Set the token encryption settings using the **[Set-AzureApplication](/powershell/module/azuread/set-azureadapplication?view=azureadps-2.0-preview)** command.
+1. Set the token encryption settings using the **[Set-AzureADApplication](/powershell/module/azuread/set-azureadapplication?view=azureadps-2.0-preview&preserve-view=true)** command.
``` Set-AzureADApplication -ObjectId <ApplicationObjectId> -KeyCredentials "<KeyCredentialsObject>" -TokenEncryptionKeyId <keyID>
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-with-policy.md
This sample requires the [AzureAD V2 PowerShell for Graph module preview version
|||
|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
-|[Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview) | Gets a policy in Azure AD. |
-|[Get-AzureADServicePrincipalPolicy](/powershell/module/azuread/get-azureadserviceprincipalpolicy?view=azureadps-2.0-preview) | Gets the policy of a service principal in Azure AD. |
+|[Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) | Gets a policy in Azure AD. |
+|[Get-AzureADServicePrincipalPolicy](/powershell/module/azuread/get-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) | Gets the policy of a service principal in Azure AD. |
## Next steps
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/docusign-tutorial.md
Previously updated : 09/09/2020 Last updated : 03/26/2021
In this tutorial, you'll configure and test Azure AD SSO in a test environment t
* DocuSign supports [automatic user provisioning](./docusign-provisioning-tutorial.md).
-* Once you configure DocuSign you can enforce Session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
- ## Adding DocuSign from the gallery To configure the integration of DocuSign into Azure AD, you must add DocuSign from the gallery to your list of managed SaaS apps:
To enable Azure AD SSO in the Azure portal, follow these steps:
1. In the Azure portal, on the **DocuSign** application integration page, find the **Manage** section, and then select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, select the pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
To enable Azure AD SSO in the Azure portal, follow these steps:
| Reply URL |
|-|
- |`https://<subdomain>.docusign.com/organizations/<OrganizationID>/saml2/login/<IDPID>`|
- |`https://<subdomain>.docusign.net/SAML/`|
+ | Production : |
+ | `https://<subdomain>.docusign.com/organizations/<OrganizationID>/saml2/login/<IDPID>` |
+ | `https://<subdomain>.docusign.net/SAML/` |
+ | QA Instance :|
+ | `https://<SUBDOMAIN>.docusign.com/organizations/saml2` |
> [!NOTE] > These bracketed values are placeholders. Replace them with the values in the actual sign-on URL, Identifier and Reply URL. These details are explained in the "View SAML 2.0 Endpoints" section later in this tutorial.
In this section, you test your Azure AD single sign-on configuration with follow
2. Go to DocuSign Sign-on URL directly and initiate the login flow from there.
-3. You can use Microsoft Access Panel. When you click the DocuSign tile in the Access Panel, you should be automatically signed in to the DocuSign for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+3. You can use Microsoft My Apps. When you click the DocuSign tile in My Apps, you should be automatically signed in to the DocuSign instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next Steps
Once you configure DocuSign you can enforce Session control, which protects exfi
<!--Image references-->
-[50]: ./media/docusign-tutorial/tutorial_docusign_18.png
-[51]: ./media/docusign-tutorial/tutorial_docusign_21.png
-[52]: ./media/docusign-tutorial/tutorial_docusign_22.png
-[53]: ./media/docusign-tutorial/tutorial_docusign_23.png
-[54]: ./media/docusign-tutorial/tutorial_docusign_19.png
-[55]: ./media/docusign-tutorial/tutorial_docusign_20.png
-[56]: ./media/docusign-tutorial/tutorial_docusign_24.png
-[57]: ./media/docusign-tutorial/tutorial_docusign_25.png
-[58]: ./media/docusign-tutorial/tutorial_docusign_26.png
-[59]: ./media/docusign-tutorial/tutorial_docusign_27.png
-[60]: ./media/docusign-tutorial/tutorial_docusign_28.png
-[61]: ./media/docusign-tutorial/tutorial_docusign_29.png
-[62]: ./media/docusign-tutorial/tutorial_docusign_30.png
+[50]: ./media/docusign-tutorial/tutorial-docusign-18.png
+[51]: ./media/docusign-tutorial/tutorial-docusign-21.png
+[52]: ./media/docusign-tutorial/tutorial-docusign-22.png
+[53]: ./media/docusign-tutorial/tutorial-docusign-23.png
+[54]: ./media/docusign-tutorial/tutorial-docusign-19.png
+[55]: ./media/docusign-tutorial/tutorial-docusign-20.png
+[56]: ./media/docusign-tutorial/tutorial-docusign-24.png
+[57]: ./media/docusign-tutorial/tutorial-docusign-25.png
+[58]: ./media/docusign-tutorial/tutorial-docusign-26.png
+[59]: ./media/docusign-tutorial/tutorial-docusign-27.png
+[60]: ./media/docusign-tutorial/tutorial-docusign-28.png
+[61]: ./media/docusign-tutorial/tutorial-docusign-29.png
+[62]: ./media/docusign-tutorial/tutorial-docusign-30.png
active-directory Equinix Federation App Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/equinix-federation-app-tutorial.md
Previously updated : 11/27/2020 Last updated : 03/22/2021 # Tutorial: Azure Active Directory single sign-on (SSO) integration with Equinix Federation App
-In this tutorial, you'll learn how to integrate Equinix Federation App with Azure Active Directory (Azure AD). When you integrate Equinix Federation App with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Equinix Federation App with Azure Active Directory (Azure AD). When you integrate Equinix Federation App with Azure AD, you can do the following:
* Control in Azure AD who has access to Equinix Federation App. * Enable your users to be automatically signed-in to Equinix Federation App with their Azure AD accounts.
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Equinix Federation App supports **SP** initiated SSO
+* Equinix Federation App supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
## Adding Equinix Federation App from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Equinix Federation App** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.equinix.com/sp/ACS.saml2`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `Equinix:<CUSTOM_IDENTIFIER>`
-
- c. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.equinix.com/sp/ACS.saml2`
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<customerprefix>customerportal.equinix.com`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [Equinix Federation App Client support team](mailto:prodsecops@equinix.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Sign on URL value is not real. Update the value with the actual Sign on URL. Contact [Equinix Federation App Client support team](mailto:prodsecops@equinix.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Equinix Federation App** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you create a user called Britta Simon in Equinix Federation App
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Equinix Federation App Sign-on URL where you can initiate the login flow.
+Go to Equinix Federation App Sign-on URL directly, and initiate the login flow from there.
-* Go to Equinix Federation App Sign-on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you click the Equinix Federation App tile in the My Apps, this will redirect to Equinix Federation App Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+ > [!NOTE]
+ > If you attempt to test your Azure application by using the **Test this application** link or by clicking the Equinix Federation App tile, it will not work, because that is IdP-initiated SSO, which Equinix does not support by default. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Equinix Federation App you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Equinix Federation App you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Scalex Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/scalex-enterprise-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://platform.rescale.com/saml2/<company id>/sso/` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [ScaleX Enterprise Client support team](https://info.rescale.com/contact_sales) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [ScaleX Enterprise Client support team](https://about.rescale.com/contactus.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. Your ScaleX Enterprise application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **emailaddress** is mapped with **user.mail**. ScaleX Enterprise application expects **emailaddress** to be mapped with **user.userprincipalname**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
When you click the ScaleX Enterprise tile in the Access Panel, you should be aut
- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md) -- [Try ScaleX Enterprise with Azure AD](https://aad.portal.azure.com/)
+- [Try ScaleX Enterprise with Azure AD](https://aad.portal.azure.com/)
active-directory Tutorial List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
To find more tutorials, use the table of contents on the left.
| ![logo-ArcGIS Enterprise](./medi)| | ![logo-AskYourTeam](./medi)| | ![logo-Atlassian Cloud](./medi)|
+| ![logo-AWS Single Sign-On](./medi)|
| ![logo-Box](./medi)| | ![logo-CakeHR](./medi)| | ![logo-Carbonite Endpoint Backup](./medi)|
active-directory My Apps Portal End User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-end-user-access.md
Previously updated : 01/19/2021 Last updated : 03/26/2021
If you don’t have access to the **My Apps** portal, contact your organization'
## Supported browsers
-You can get to the **My Apps** portal from any of the following web browsers:
+You can get to the **My Apps** portal from any of the following web browsers. Microsoft recommends that you use the most up-to-date browser that's compatible with your operating system.
-- Microsoft Edge (the mobile version of Edge is currently the only supported mobile browser)-- Google Chrome-- Mozilla Firefox, version 26.0 or later
+- Microsoft Edge (latest version, desktop and mobile)
+- Safari (latest version, Mac and iOS)
+- Chrome (latest version, desktop and mobile)
+- Firefox (latest version)
You can access and use the My Apps portal on your computer, or from the mobile version of the Edge browser on an iOS or Android mobile device.
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/managed-aad.md
Make sure the admin of the security group has given your account an *Active* ass
[kubernetes-webhook]:https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
-[aad-pricing]: /azure/pricing/details/active-directory
+[aad-pricing]: https://azure.microsoft.com/pricing/details/active-directory/
<!-- LINKS - Internal --> [aad-conditional-access]: ../active-directory/conditional-access/overview.md
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/update-credentials.md
az aks update-credentials \
--client-secret $SP_SECRET ```
-For small and medium size clusters, it takes a few moments for the service principal credentials to be updated in the AKS.
+For small and midsize clusters, it takes a few moments for the service principal credentials to be updated in the AKS.
## Update AKS Cluster with new AAD Application credentials
app-service Deploy Ci Cd Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-ci-cd-custom-container.md
From the left menu, click **Deployment Center** > **Settings**.
**Choose** the deployment source depending on your scenario: - **Container registry** sets up CI/CD between your container registry and App Service. - The **GitHub Actions** option is for you if you maintain the source code for your container image in GitHub. Triggered by new commits to your GitHub repository, the deploy action can run `docker build` and `docker push` directly to your container registry, then update your App Service app to run the new image. For more information, see [How CI/CD works with GitHub Actions](#how-cicd-works-with-github-actions).-- To set up CI/CD with **Azure Pipelines**, see [Deploy an Azure Web App Container from Azure Pipelines](/azure/devops/pipelines/targets/webapp-on-container-linux).
+- To set up CI/CD with **Azure Pipelines**, see [Deploy an Azure Web App Container from Azure Pipelines](/azure/devops/pipelines/targets/webapp-on-container-linux).
> [!NOTE] > For a Docker Compose app, select **Container Registry**.
avere-vfxt Avere Vfxt Demo Links https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/avere-vfxt/avere-vfxt-demo-links.md
Sample tutorials are provided on [GitHub](https://github.com/Azure/Avere). These
## vFXT performance
-* [Measure vFXT performance with vdbench](https://github.com/Azure/Avere/blob/master/docs/vdbench.md) - A basic test setup to generate small and medium-sized workloads to test the vFXT memory and disk subsystems
+* [Measure vFXT performance with vdbench](https://github.com/Azure/Avere/blob/master/docs/vdbench.md) - A basic test setup to generate small and midsize workloads to test the vFXT memory and disk subsystems
## Client setup
azure-arc Restore Adventureworks Sample Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db.md
# Restore the AdventureWorks sample database into SQL Managed Instance - Azure Arc
-[AdventureWorks](/sql/samples/adventureworks-install-configure?view=sql-server-ver15&tabs=tsql&preserve-view=true) is a sample database containing an OLTP database that is often used in tutorials, and examples. It is provided and maintained by Microsoft as part of the [SQL Server samples GitHub repository](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases).
+[AdventureWorks](/sql/samples/adventureworks-install-configure) is a sample database containing an OLTP database that is often used in tutorials and examples. It is provided and maintained by Microsoft as part of the [SQL Server samples GitHub repository](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases).
This document describes a simple process to get the AdventureWorks sample database restored into your SQL Managed Instance - Azure Arc.
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
You can mix and match services when you build an orchestration, calling function
| **Connectivity** | [About a dozen built-in binding types](functions-triggers-bindings.md#supported-bindings), write code for custom bindings | [Large collection of connectors](../connectors/apis-list.md), [Enterprise Integration Pack for B2B scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md), [build custom connectors](../logic-apps/custom-connector-overview.md) | | **Actions** | Each activity is an Azure function; write code for activity functions |[Large collection of ready-made actions](../logic-apps/logic-apps-workflow-actions-triggers.md)| | **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor logs](../logic-apps/monitor-logic-apps.md)|
-| **Management** | [REST API](durable/durable-functions-http-api.md), [Visual Studio](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer?view=vs-2019) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [REST API](/rest/api/logic/), [PowerShell](/powershell/module/az.logicapp), [Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md) |
+| **Management** | [REST API](durable/durable-functions-http-api.md), [Visual Studio](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [REST API](/rest/api/logic/), [PowerShell](/powershell/module/az.logicapp), [Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md) |
| **Execution context** | Can run [locally](functions-runtime-overview.md) or in the cloud | Runs only in the cloud| <a name="function"></a>
azure-functions Functions Test A Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-test-a-function.md
To set up your environment, create a Function and test app. The following steps
3. [Create a timer function from the template](./functions-create-scheduled-function.md) and name it **MyTimerTrigger**. 4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**. 5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/)
-6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project?view=vs-2017) from *Functions.Tests* app.
+6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project) from *Functions.Tests* app.
### Create test classes
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
Starting from version 3.0.2, specific auto-collected telemetry can be suppressed
} ```
+> [!NOTE]
+> If you are looking for more fine-grained control, for example to suppress some Redis calls but not all Redis calls,
+> see [sampling overrides](./java-standalone-sampling-overrides.md).
++ ## Heartbeat By default, Application Insights Java 3.0 sends a heartbeat metric once every 15 minutes. If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger.md
Version 15.2 (or above) of Visual Studio 2017 publishes symbols for release buil
For Azure Compute and other types, make sure that the symbol files are in the same folder of the main application .dll (typically, `wwwroot/bin`) or are available on the current path. > [!NOTE]
-> For more information on the different symbol options that are available consult the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp?view=vs-2019#output
+). For best results, we recommend using "Full", "Portable", or "Embedded".
+> For more information on the different symbol options that are available consult the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp?view=vs-2019&preserve-view=true#output
+). For best results, we recommend using "Full", "Portable" or "Embedded".
### Optimized builds In some cases, local variables can't be viewed in release builds because of optimizations that are applied by the JIT compiler.
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
Review the [API reference](status-monitor-v2-api-reference.md) for a detailed de
4. Try to browse to your app. 5. After your app is loaded, return to PerfView and select **Stop Collection**.
+### How to capture full SQL command text
+To capture full SQL command text, you need to modify the applicationinsights.config file with the following:
+
+```xml
+<Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
+<EnableSqlCommandTextInstrumentation>true</EnableSqlCommandTextInstrumentation>
+</Add>
+```
## Next steps
azure-monitor Visual Studio Codelens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/visual-studio-codelens.md
# Application Insights telemetry in Visual Studio CodeLens
-Methods in the code of your web app can be annotated with telemetry about run-time exceptions and request response times. If you install [Azure Application Insights](./app-insights-overview.md) in your application, the telemetry appears in Visual Studio [CodeLens](/visualstudio/ide/find-code-changes-and-other-history-with-codelens?view=vs-2015) - the notes at the top of each function where you're used to seeing useful information such as the number of places the function is referenced or the last person who edited it.
+Methods in the code of your web app can be annotated with telemetry about run-time exceptions and request response times. If you install [Azure Application Insights](./app-insights-overview.md) in your application, the telemetry appears in Visual Studio [CodeLens](/visualstudio/ide/find-code-changes-and-other-history-with-codelens) - the notes at the top of each function where you're used to seeing useful information such as the number of places the function is referenced or the last person who edited it.
![CodeLens](./media/visual-studio-codelens/codelens-overview.png)
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/autoscale/autoscale-get-started.md
You can now set the number of instances that you want to scale to manually.
You can always return to Autoscale by clicking **Enable autoscale** and then **Save**.
+### Cool-down period effects
+
+Autoscale uses a cool-down period to prevent "flapping", which is the rapid, repetitive scaling of instances up and down. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For more on flapping and on monitoring the autoscale engine, see [Autoscale Best Practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
+ ## Route traffic to healthy instances (App Service) <a id="health-check-path"></a>
To learn more about moving resources between regions and disaster recovery in Az
- [Create an Activity Log Alert to monitor all Autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed Autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/monitor-autoscale-failed-alert) + <!--Reference--> [1]:https://portal.azure.com [2]: ./media/autoscale-get-started/azure-monitor-launch.png
To learn more about moving resources between regions and disaster recovery in Az
[11]: ./media/autoscale-get-started/scale-history.png [12]: ./media/autoscale-get-started/scale-definition-json.png [13]: ./media/autoscale-get-started/disable-autoscale.png
-[14]: ./media/autoscale-get-started/set-manualscale.png
+[14]: ./media/autoscale-get-started/set-manualscale.png
azure-resource-manager Create Uidefinition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/create-uidefinition-overview.md
description: Describes how to create user interface definitions for the Azure po
Previously updated : 07/14/2020 Last updated : 03/26/2021
The `config` property is optional. Use it to either override the default behavio
"constraints": { "validations": [ {
- "isValid": "[expression for checking]",
- "message": "Please select a valid subscription."
+ "isValid": "[not(contains(subscription().displayName, 'Test'))]",
+ "message": "Can't use test subscription."
}, {
- "permission": "<Resource Provider>/<Action>",
- "message": "Must have correct permission to complete this step."
+ "permission": "Microsoft.Compute/virtualmachines/write",
+ "message": "Must have write permission for the virtual machine."
+ },
+ {
+ "permission": "Microsoft.Compute/virtualMachines/extensions/write",
+ "message": "Must have write permission for the extension."
} ] }, "resourceProviders": [
- "<Resource Provider>"
+ "Microsoft.Compute"
] }, "resourceGroup": { "constraints": { "validations": [ {
- "isValid": "[expression for checking]",
- "message": "Please select a valid resource group."
+ "isValid": "[not(contains(resourceGroup().name, 'test'))]",
+ "message": "Resource group name can't contain 'test'."
} ] },
The `config` property is optional. Use it to either override the default behavio
}, ```
+For the `isValid` property, write an expression that resolves to either true or false. For the `permission` property, specify one of the [resource provider actions](../../role-based-access-control/resource-provider-operations.md).
+ ### Wizard The `isWizard` property enables you to require successful validation of each step before proceeding to the next step. When the `isWizard` property isn't specified, the default is **false**, and step-by-step validation isn't required.
-When `isWizard` is enabled, set to **true**, the **Basics** tab is available and all other tabs are disabled. When the **Next** button is selected the tab's icon indicates if a tab's validation passed or failed. After a tab's required fields are completed and validated the **Next** button allows navigation to the next tab. When all tabs pass validation, you can go to the **Review and Create** page and select the **Create** button to begin the deployment.
+When `isWizard` is enabled (set to **true**), the **Basics** tab is available and all other tabs are disabled. When the **Next** button is selected, the tab's icon indicates whether that tab's validation passed or failed. After a tab's required fields are completed and validated, the **Next** button allows navigation to the next tab. When all tabs pass validation, you can go to the **Review and Create** page and select the **Create** button to begin the deployment.
:::image type="content" source="./media/create-uidefinition-overview/tab-wizard.png" alt-text="Tab wizard":::
The basics config lets you customize the basics step.
For `description`, provide a markdown-enabled string that describes your resource. Multi-line format and links are supported.
-The `subscription` and `resourceGroup` elements enable you to specify additional validations. The syntax for specifying validations is identical to the custom validation for [text box](microsoft-common-textbox.md). You can also specify `permission` validations on the subscription or resource group.
+The `subscription` and `resourceGroup` elements enable you to specify more validations. The syntax for specifying validations is identical to the custom validation for [text box](microsoft-common-textbox.md). You can also specify `permission` validations on the subscription or resource group.
The subscription control accepts a list of resource provider namespaces. For example, you can specify **Microsoft.Compute**. It shows an error message when the user selects a subscription that doesn't support the resource provider. The error occurs when the resource provider isn't registered on that subscription, and the user doesn't have permission to register the resource provider.
The following example shows a text box that has been added to the default elemen
## Steps
-The steps property contains zero or more additional steps to display after basics. Each step contains one or more elements. Consider adding steps per role or tier of the application being deployed. For example, add a step for master node inputs, and a step for the worker nodes in a cluster.
+The steps property contains zero or more steps to display after basics. Each step contains one or more elements. Consider adding steps per role or tier of the application being deployed. For example, add a step for primary node inputs, and a step for the worker nodes in a cluster.
```json "steps": [
azure-sql Active Directory Interactive Connect Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/active-directory-interactive-connect-azure-sql-db.md
The C# example relies on the [`System.Data.SqlClient`](/dotnet/api/system.data.s
Use this value for authentication that requires an Azure AD user name and password. Azure SQL Database does the authentication. This method doesn't support Multi-Factor Authentication. > [!NOTE]
-> If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient?view=sqlclient-dotnet-core-1.1) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
+> If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
## Set C# parameter values from the Azure portal
For more information, see [Configure Multi-Factor Authentication for SSMS and Az
## C# code example > [!NOTE]
-> If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient?view=sqlclient-dotnet-core-1.1) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
+> If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
The example C# program relies on the [*Microsoft.IdentityModel.Clients.ActiveDirectory*](/dotnet/api/microsoft.identitymodel.clients.activedirectory) DLL assembly.
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
To grant your SQL Managed Instance Azure AD read permission using the Azure port
The process of changing the administrator may take several minutes. Then the new administrator appears in the Active Directory admin box.
-After provisioning an Azure AD admin for your SQL Managed Instance, you can begin to create Azure AD server principals (logins) with the <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current">CREATE LOGIN</a> syntax. For more information, see [SQL Managed Instance overview](../managed-instance/sql-managed-instance-paas-overview.md#azure-active-directory-integration).
+After provisioning an Azure AD admin for your SQL Managed Instance, you can begin to create Azure AD server principals (logins) with the [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) syntax. For more information, see [SQL Managed Instance overview](../managed-instance/sql-managed-instance-paas-overview.md#azure-active-directory-integration).
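As a minimal sketch of that step, assuming hypothetical Azure AD identities in the tenant, the **FROM EXTERNAL PROVIDER** clause is what distinguishes an Azure AD server principal from a SQL login; the statements run in the master database of the managed instance while signed in as the Azure AD admin:

```sql
-- Hypothetical Azure AD user and group; run in the master database of the
-- managed instance while connected as the Azure AD admin.
CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER;
CREATE LOGIN [Contoso-DBAs] FROM EXTERNAL PROVIDER;
```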
> [!TIP] > To later remove an Admin, at the top of the Active Directory admin page, select **Remove admin**, and then select **Save**.
On all client machines, from which your applications or users connect to SQL Dat
- [.NET Framework 4.6 or later](/dotnet/framework/install/guide-for-developers). - Azure Active Directory Authentication Library for SQL Server (*ADAL.DLL*). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains the *ADAL.DLL* library. - [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms)
- - [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15)
- - [OLE DB Driver 18 for SQL Server](/sql/connect/oledb/download-oledb-driver-for-sql-server?view=sql-server-ver15)
+ - [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15&preserve-view=true)
+ - [OLE DB Driver 18 for SQL Server](/sql/connect/oledb/download-oledb-driver-for-sql-server?view=sql-server-ver15&preserve-view=true)
You can meet these requirements by:
## Create contained users mapped to Azure AD identities
-Because SQL Managed Instance supports Azure AD server principals (logins), using contained database users is not required. Azure AD server principals (logins) enable you to create logins from Azure AD users, groups, or applications. This means that you can authenticate with your SQL Managed Instance by using the Azure AD server login rather than a contained database user. For more information, see [SQL Managed Instance overview](../managed-instance/sql-managed-instance-paas-overview.md#azure-active-directory-integration). For syntax on creating Azure AD server principals (logins), see <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current">CREATE LOGIN</a>.
+Because SQL Managed Instance supports Azure AD server principals (logins), using contained database users is not required. Azure AD server principals (logins) enable you to create logins from Azure AD users, groups, or applications. This means that you can authenticate with your SQL Managed Instance by using the Azure AD server login rather than a contained database user. For more information, see [SQL Managed Instance overview](../managed-instance/sql-managed-instance-paas-overview.md#azure-active-directory-integration). For syntax on creating Azure AD server principals (logins), see [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true).
However, using Azure Active Directory authentication with SQL Database and Azure Synapse requires using contained database users based on an Azure AD identity. A contained database user does not have a login in the master database, and maps to an identity in Azure AD that is associated with the database. The Azure AD identity can be either an individual user account or a group. For more information about contained database users, see [Contained Database Users- Making Your Database Portable](/sql/relational-databases/security/contained-database-users-making-your-database-portable).
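For illustration only, a minimal sketch of a contained database user mapped to a hypothetical Azure AD identity; it runs in the target user database (not master) while connected as the Azure AD admin of the server:

```sql
-- Hypothetical Azure AD user; no corresponding login is created in master.
CREATE USER [alice@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant access through a fixed database role.
ALTER ROLE db_datareader ADD MEMBER [alice@contoso.com];
```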
azure-sql Authentication Aad Directory Readers Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-directory-readers-role.md
The **Directory Readers** role is necessary to:
- Create Azure AD logins for SQL Managed Instance - Impersonate Azure AD users in Azure SQL-- Migrate SQL Server users that use Windows authentication to SQL Managed Instance with Azure AD authentication (using the [ALTER USER (Transact-SQL)](/sql/t-sql/statements/alter-user-transact-sql?view=azuresqldb-mi-current#d-map-the-user-in-the-database-to-an-azure-ad-login-after-migration) command)
+- Migrate SQL Server users that use Windows authentication to SQL Managed Instance with Azure AD authentication (using the [ALTER USER (Transact-SQL)](/sql/t-sql/statements/alter-user-transact-sql?view=azuresqldb-mi-current&preserve-view=true#d-map-the-user-in-the-database-to-an-azure-ad-login-after-migration) command)
- Change the Azure AD admin for SQL Managed Instance - Allow [service principals (Applications)](authentication-aad-service-principal.md) to create Azure AD users in Azure SQL
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
To migrate a database into an [Azure SQL Managed Instance](../managed-instance/s
1. Select the storage account and the container for the BACPAC file and then select the BACPAC file from which to import.
-1. Specify the new database size (usually the same as origin) and provide the destination SQL Server credentials. For a list of possible values for a new database in Azure SQL Database, see [Create Database](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current).
+1. Specify the new database size (usually the same as origin) and provide the destination SQL Server credentials. For a list of possible values for a new database in Azure SQL Database, see [Create Database](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current&preserve-view=true).
![Database import2](./media/database-import/sql-server-import-database-settings.png)
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.
> [A SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) does not currently support migrating a database into an instance database from a BACPAC file using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage. > [!NOTE]
-> The machines processing import/export requests submitted through portal or Powershell need to store the bacpac file as well as temporary files generated by Data-Tier Application Framework (DacFX). The disk space required varies significantly among DBs with same size and can take up to 3 times of the database size. Machines running the import/export request only have 450GB local disk space. As result, some requests may fail with ΓÇ£There is not enough space on the diskΓÇ¥ error. In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting databases larger than 150GB, use SqlPackage to avoid this issue.
+> The machines that process import/export requests submitted through the portal or PowerShell need to store the BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required varies significantly among databases of the same size and can be up to three times the database size. Machines running the import/export request only have 450 GB of local disk space. As a result, some requests may fail with a "There is not enough space on the disk" error. In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing or exporting databases larger than 150 GB, use SqlPackage to avoid this issue.
# [PowerShell](#tab/azure-powershell)
azure-sql Develop Cplusplus Simple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/develop-cplusplus-simple.md
Make sure you have the following items:
* An active Azure account. If you don't have one, you can sign up for a [Free Azure Trial](https://azure.microsoft.com/pricing/free-trial/). * [Visual Studio](https://www.visualstudio.com/downloads/). You must install the C++ language components to build and run this sample.
-* [Visual Studio Linux Development](/cpp/linux/?view=vs-2019). If you are developing on Linux, you must also install the Visual Studio Linux extension.
+* [Visual Studio Linux Development](/cpp/linux/). If you are developing on Linux, you must also install the Visual Studio Linux extension.
## <a id="AzureSQL"></a>Azure SQL Database and SQL Server on virtual machines
azure-sql Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/disaster-recovery-guidance.md
Use the [Get Recoverable Database](/previous-versions/azure/reference/dn800985(v
## Wait for service recovery
-The Azure teams work diligently to restore service availability as quickly as possible but depending on the root cause it can take hours or days. If your application can tolerate significant downtime you can simply wait for the recovery to complete. In this case, no action on your part is required. You can see the current service status on our [Azure Service Health Dashboard](https://azure.microsoft.com/status/). After the recovery of the region, your applicationΓÇÖs availability is restored.
+The Azure teams work diligently to restore service availability as quickly as possible but depending on the root cause it can take hours or days. If your application can tolerate significant downtime you can simply wait for the recovery to complete. In this case, no action on your part is required. You can see the current service status on our [Azure Service Health Dashboard](https://azure.microsoft.com/status/). After the recovery of the region, your application's availability is restored.
## Fail over to geo-replicated secondary server in the failover group
Use one of the following guides to fail over to a geo-replicated secondary datab
- [Fail over to a geo-replicated secondary server using the Azure portal](active-geo-replication-configure-portal.md) - [Fail over to the secondary server using PowerShell](scripts/setup-geodr-and-failover-database-powershell.md)-- [Fail over to a secondary server using Transact-SQL (T-SQL)](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current#e-failover-to-a-geo-replication-secondary)
+- [Fail over to a secondary server using Transact-SQL (T-SQL)](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true#e-failover-to-a-geo-replication-secondary)
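As a rough sketch of the T-SQL option listed above, with a hypothetical database name, the statement is run in the master database of the secondary server that you want to promote:

```sql
-- Hypothetical database name; execute on the secondary server's master database.
-- Use FORCE_FAILOVER_ALLOW_DATA_LOSS instead only when the primary is unavailable.
ALTER DATABASE [ContosoDb] FAILOVER;
```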
## Recover using geo-restore
If you are using geo-restore to recover from an outage, you must make sure that
### Update connection strings
-Because your recovered database resides in a different server, you need to update your applicationΓÇÖs connection string to point to that server.
+Because your recovered database resides in a different server, you need to update your application's connection string to point to that server.
For more information about changing connection strings, see the appropriate development language for your [connection library](connect-query-content-reference-guide.md#libraries).
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
This table provides a quick comparison for the change in terminology:
| Feature | Details | | | |
-| <a href="/azure/azure-sql/database/elastic-transactions-overview">Distributed transactions</a> | Distributed transactions across Managed Instances. |
-| <a href="/azure/sql-database/sql-database-instance-pools">Instance pools</a> | A convenient and cost-efficient way to migrate smaller SQL instances to the cloud. |
-| <a href="/en-gb/sql/t-sql/statements/create-login-transact-sql">Instance-level Azure AD server principals (logins)</a> | Create instance-level logins using a <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true">CREATE LOGIN FROM EXTERNAL PROVIDER</a> statement. |
+| [Distributed transactions](/azure/azure-sql/database/elastic-transactions-overview) | Distributed transactions across Managed Instances. |
+| [Instance pools](/azure/sql-database/sql-database-instance-pools) | A convenient and cost-efficient way to migrate smaller SQL instances to the cloud. |
+| [Instance-level Azure AD server principals (logins)](/sql/t-sql/statements/create-login-transact-sql) | Create instance-level logins using a [CREATE LOGIN FROM EXTERNAL PROVIDER](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) statement. |
| [Transactional Replication](../managed-instance/replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](../managed-instance/replication-between-two-instances-configure-tutorial.md). | | Threat detection |For information, see [Configure threat detection in Azure SQL Managed Instance](../managed-instance/threat-detection-configure.md).|
-| Long-term backup retention | For information, see [Configure long-term back up retention in Azure SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md), which is currently in limited public preview. |
+| Long-term backup retention | For information, see [Configure long-term back up retention in Azure SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md), which is currently in limited public preview. |
BULK INSERT Sales.Invoices FROM 'inv-2017-12-08.csv' WITH (DATA_SOURCE = 'MyAzur
In some circumstances, there might be an issue with the service principal used to access Azure AD and Azure Key Vault (AKV) services. As a result, this issue impacts the use of Azure AD authentication and Transparent Data Encryption (TDE) with SQL Managed Instance. This might appear as an intermittent connectivity issue, or as an inability to run statements such as CREATE LOGIN/USER FROM EXTERNAL PROVIDER or EXECUTE AS LOGIN/USER. Setting up TDE with a customer-managed key on a new Azure SQL Managed Instance might also not work in some circumstances.
-**Workaround**: To prevent this issue from occurring on your SQL Managed Instance before executing any update commands, or in case you have already experienced this issue after update commands, go to Azure portal, access SQL Managed Instance [Active Directory admin blade](./authentication-aad-configure.md?tabs=azure-powershell#azure-portal). Verify if you can see the error message "Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service PrincipalΓÇ¥. In case you have encountered this error message, click on it, and follow the step-by-step instructions provided until this error have been resolved.
+**Workaround**: To prevent this issue from occurring on your SQL Managed Instance before executing any update commands, or in case you have already experienced this issue after update commands, go to the Azure portal and open the SQL Managed Instance [Active Directory admin blade](./authentication-aad-configure.md?tabs=azure-powershell#azure-portal). Check whether the error message "Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal" appears. If you see this error message, select it and follow the step-by-step instructions provided until the error is resolved.
### Restoring manual backup without CHECKSUM might fail
azure-sql Logical Servers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/logical-servers.md
To create and manage servers, databases, and firewalls with Transact-SQL, use th
| Command | Description | | | |
-|[CREATE DATABASE (Azure SQL Database)](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current) | Creates a new database in Azure SQL Database. You must be connected to the master database to create a new database.|
-|[CREATE DATABASE (Azure Synapse)](/sql/t-sql/statements/create-database-transact-sql?view=azure-sqldw-latest) | Creates a new data warehouse database in Azure Synapse. You must be connected to the master database to create a new database.|
-| [ALTER DATABASE (Azure SQL Database)](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current) |Modifies database or elastic pool. |
-|[ALTER DATABASE (Azure Synapse Analytics)](/sql/t-sql/statements/alter-database-transact-sql?view=sql-server-ver15)|Modifies a data warehouse database in Azure Synapse.|
+|[CREATE DATABASE (Azure SQL Database)](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current&preserve-view=true) | Creates a new database in Azure SQL Database. You must be connected to the master database to create a new database.|
+|[CREATE DATABASE (Azure Synapse)](/sql/t-sql/statements/create-database-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Creates a new data warehouse database in Azure Synapse. You must be connected to the master database to create a new database.|
+| [ALTER DATABASE (Azure SQL Database)](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true) |Modifies database or elastic pool. |
+|[ALTER DATABASE (Azure Synapse Analytics)](/sql/t-sql/statements/alter-database-transact-sql?view=azure-sqldw-latest&preserve-view=true&tabs=sqlpool)|Modifies a data warehouse database in Azure Synapse.|
|[DROP DATABASE (Transact-SQL)](/sql/t-sql/statements/drop-database-transact-sql)|Deletes a database.| |[sys.database_service_objectives (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-database-service-objectives-azure-sql-database)|Returns the edition (service tier), service objective (pricing tier), and elastic pool name, if any, for a database. If logged on to the master database for a server, returns information on all databases. For Azure Synapse, you must be connected to the master database.| |[sys.dm_db_resource_stats (Azure SQL Database)](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database)| Returns CPU, IO, and memory consumption for a database in Azure SQL Database. One row exists for every 15 seconds, even if there is no activity in the database.|
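To make the table above concrete, here is a minimal, hypothetical example of the Azure SQL Database form of CREATE DATABASE; the database name, edition, service objective, and size below are placeholders, and the statement runs while connected to the master database of the logical server:

```sql
-- Hypothetical database name, edition, service objective, and maximum size.
CREATE DATABASE ContosoDb
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_Gen5_2',
    MAXSIZE = 32 GB
);
```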
azure-sql Logins Create Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/logins-create-manage.md
At this point, your server or managed instance is only configured for access usi
- Create an additional SQL login in the master database. - Add the login to the [sysadmin fixed server role](/sql/relational-databases/security/authentication-access/server-level-roles) using the [ALTER SERVER ROLE](/sql/t-sql/statements/alter-server-role-transact-sql) statement. This login will have full administrative permissions.
- - Alternatively, create an [Azure AD login](authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance) using the [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current) syntax.
+ - Alternatively, create an [Azure AD login](authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance) using the [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) syntax.
- **In SQL Database, create SQL logins with limited administrative permissions**
You can create accounts for non-administrative users using one of two methods:
For examples showing how to create logins and users, see: -- [Create login for Azure SQL Database](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current#examples-1)-- [Create login for Azure SQL Managed Instance](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current#examples-2)-- [Create login for Azure Synapse](/sql/t-sql/statements/create-login-transact-sql?view=azure-sqldw-latest#examples-3)
+- [Create login for Azure SQL Database](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1)
+- [Create login for Azure SQL Managed Instance](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true#examples-2)
+- [Create login for Azure Synapse](/sql/t-sql/statements/create-login-transact-sql?view=azure-sqldw-latest&preserve-view=true#examples-3)
- [Create user](/sql/t-sql/statements/create-user-transact-sql#examples) - [Creating Azure AD contained users](authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities)
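To complement the examples linked above, a minimal sketch of the SQL Managed Instance option described earlier in this section: create an additional SQL login in master and add it to the sysadmin fixed server role. The login name and password are placeholders.

```sql
-- Hypothetical login name and placeholder password; run in the master database
-- of the managed instance. The sysadmin role applies to SQL Managed Instance,
-- not to Azure SQL Database.
CREATE LOGIN AppAdminLogin WITH PASSWORD = '<strong-password-here>';
ALTER SERVER ROLE sysadmin ADD MEMBER AppAdminLogin;
```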
azure-sql Secure Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/secure-database-tutorial.md
To set up a database-level firewall rule:
1. On the toolbar, select **Execute** to create the firewall rule. > [!NOTE]
-> You can also create a server-level firewall rule in SSMS by using the [sp_set_firewall_rule](/sql/relational-databases/system-stored-procedures/sp-set-firewall-rule-azure-sql-database?view=azuresqldb-current) command, though you must be connected to the *master* database.
+> You can also create a server-level firewall rule in SSMS by using the [sp_set_firewall_rule](/sql/relational-databases/system-stored-procedures/sp-set-firewall-rule-azure-sql-database?view=azuresqldb-current&preserve-view=true) command, though you must be connected to the *master* database.
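For illustration, a minimal sketch of that stored procedure call with a hypothetical rule name and IP range, run in the *master* database:

```sql
-- Hypothetical rule name and IP range; adjust to the client addresses you trust.
EXECUTE sp_set_firewall_rule
    @name = N'AllowClientSubnet',
    @start_ip_address = '192.168.1.1',
    @end_ip_address = '192.168.1.10';
```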
## Create an Azure AD admin
azure-sql Security Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-best-practice.md
Central identity management offers the following benefits:
- Assign access rights to resources to Azure AD principals via group assignment: Create Azure AD groups, grant access to groups, and add individual members to the groups. In your database, create contained database users that map your Azure AD groups. To assign permissions inside the database, put the users that are associated with your Azure AD groups in database roles with the appropriate permissions. - See the articles, [Configure and manage Azure Active Directory authentication with SQL](authentication-aad-configure.md) and [Use Azure AD for authentication with SQL](authentication-aad-overview.md). > [!NOTE]
- > In SQL Managed Instance, you can also create logins that map to Azure AD principals in the master database. See [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current).
+ > In SQL Managed Instance, you can also create logins that map to Azure AD principals in the master database. See [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true).
- Using Azure AD groups simplifies permission management and both the group owner, and the resource owner can add/remove members to/from the group.
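A minimal sketch of the group-based pattern described above, using a hypothetical Azure AD group, schema, and custom database role; it runs in the user database while connected as the Azure AD admin:

```sql
-- Hypothetical Azure AD group, database role, and schema names.
CREATE USER [Sales-Analysts] FROM EXTERNAL PROVIDER;
CREATE ROLE sales_reporting;
GRANT SELECT ON SCHEMA::Sales TO sales_reporting;
ALTER ROLE sales_reporting ADD MEMBER [Sales-Analysts];
```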
Azure AD Multi-Factor Authentication helps provide additional security by requi
- Use Azure AD Interactive authentication mode for Azure SQL Database and Azure SQL Managed Instance where a password is requested interactively, followed by Multi-Factor Authentication: - Use Universal Authentication in SSMS. See the article, [Using Multi-factor Azure AD authentication with Azure SQL Database, SQL Managed Instance, Azure Synapse (SSMS support for Multi-Factor Authentication)](authentication-mfa-ssms-overview.md).
- - Use Interactive Authentication supported in SQL Server Data Tools (SSDT). See the article, [Azure Active Directory support in SQL Server Data Tools (SSDT)](/sql/ssdt/azure-active-directory?view=azuresqldb-current).
+ - Use Interactive Authentication supported in SQL Server Data Tools (SSDT). See the article, [Azure Active Directory support in SQL Server Data Tools (SSDT)](/sql/ssdt/azure-active-directory?view=azuresqldb-current&preserve-view=true).
- Use other SQL tools supporting Multi-Factor Authentication. - SSMS Wizard support for export/extract/deploy database
- - [sqlpackage.exe](/sql/tools/sqlpackage): option ΓÇÿ/uaΓÇÖ
+ - [sqlpackage.exe](/sql/tools/sqlpackage): option '/ua'
- [sqlcmd Utility](/sql/tools/sqlcmd-utility): option -G (interactive) - [bcp Utility](/sql/tools/bcp-utility): option -G (interactive)
For cases when passwords aren't avoidable, make sure they're secured.
- If avoiding passwords or secrets aren't possible, store user passwords and application secrets in Azure Key Vault and manage access through Key Vault access policies. -- Various app development frameworks may also offer framework-specific mechanisms for protecting secrets in the app. For example: [ASP.NET core app](/aspnet/core/security/app-secrets?tabs=windows&view=aspnetcore-2.1).
+- Various app development frameworks may also offer framework-specific mechanisms for protecting secrets in the app. For example: [ASP.NET core app](/aspnet/core/security/app-secrets?tabs=windows).
### Use SQL authentication for legacy applications
Discover columns that potentially contain sensitive data. What is considered sen
**Best practices**: -- Monitor the classification dashboard on a regular basis for an accurate assessment of the databaseΓÇÖs classification state. A report on the database classification state can be exported or printed to share for compliance and auditing purposes.
+- Monitor the classification dashboard on a regular basis for an accurate assessment of the database's classification state. A report on the database classification state can be exported or printed to share for compliance and auditing purposes.
- Continuously monitor the status of recommended sensitive data in SQL Vulnerability Assessment. Track the sensitive data discovery rule and identify any drift in the recommended columns for classification.
azure-sql Load From Csv With Bcp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/load-from-csv-with-bcp.md
To complete the steps in this article, you need:
* The bcp command-line utility installed * The sqlcmd command-line utility installed
-You can download the bcp and sqlcmd utilities from the [Microsoft sqlcmd Documentation][https://docs.microsoft.com/sql/tools/sqlcmd-utility?view=sql-server-ver15].
+You can download the bcp and sqlcmd utilities from the [Microsoft sqlcmd Documentation](/sql/tools/sqlcmd-utility?view=sql-server-ver15&preserve-view=true).
### Data in ASCII or UTF-16 format
azure-sql Aad Security Configure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/aad-security-configure-tutorial.md
See the following articles for examples of connecting to SQL Managed Instance:
![Screenshot of the Results tab in the S S M S Object Explorer showing the name, principal_id, sid, type, and type_desc of the newly added login.](./media/aad-security-configure-tutorial/native-login.png)
-For more information, see [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current).
+For more information, see [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true).
## Grant permissions to create logins
Once the Azure AD server principal (login) has been created, and provided with `
GO ```
-1. Create a database in the managed instance using the [CREATE DATABASE](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-mi-current) syntax. This database will be used to test user logins in the next section.
+1. Create a database in the managed instance using the [CREATE DATABASE](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-mi-current&preserve-view=true) syntax. This database will be used to test user logins in the next section.
1. In **Object Explorer**, right-click the server and choose **New Query**. 1. In the query window, use the following syntax to create a database named **MyMITestDB**.
For more information on granting database permissions, see [Getting Started with
> [!IMPORTANT] > When creating a **USER** from an Azure AD server principal (login), specify the user_name as the same login_name from **LOGIN**.
- For more information, see [CREATE USER](/sql/t-sql/statements/create-user-transact-sql?view=azuresqldb-mi-current).
+ For more information, see [CREATE USER](/sql/t-sql/statements/create-user-transact-sql?view=azuresqldb-mi-current&preserve-view=true).
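   To illustrate the note above with a hypothetical Azure AD server principal, the user_name and login_name are identical:

   ```sql
   -- Hypothetical Azure AD login; the database user name must match the
   -- login name exactly.
   CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com];
   ```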
1. In a new query window, create a test table using the following T-SQL command:
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
SQL Managed Instance combines the best features that are available both in Azure
| | | |No hardware purchasing and management <br>No management overhead for managing underlying infrastructure <br>Quick provisioning and service scaling <br>Automated patching and version upgrade <br>Integration with other PaaS data services |99.99% uptime SLA <br>Built-in [high availability](../database/high-availability-sla.md) <br>Data protected with [automated backups](../database/automated-backups-overview.md) <br>Customer configurable backup retention period <br>User-initiated [backups](/sql/t-sql/statements/backup-transact-sql?preserve-view=true&view=azuresqldb-mi-current) <br>[Point-in-time database restore](../database/recovery-using-backups.md#point-in-time-restore) capability | |**Security and compliance** | **Management**|
-|Isolated environment ([VNet integration](connectivity-architecture-overview.md), single tenant service, dedicated compute and storage) <br>[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)<br>[Azure Active Directory (Azure AD) authentication](../database/authentication-aad-overview.md), single sign-on support <br> <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true">Azure AD server principals (logins)</a> <br>Adheres to compliance standards same as Azure SQL Database <br>[SQL auditing](auditing-configure.md) <br>[Advanced Threat Protection](threat-detection-configure.md) |Azure Resource Manager API for automating service provisioning and scaling <br>Azure portal functionality for manual service provisioning and scaling <br>Data Migration Service
+|Isolated environment ([VNet integration](connectivity-architecture-overview.md), single tenant service, dedicated compute and storage) <br>[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)<br>[Azure Active Directory (Azure AD) authentication](../database/authentication-aad-overview.md), single sign-on support <br> [Azure AD server principals (logins)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) <br>Adheres to compliance standards same as Azure SQL Database <br>[SQL auditing](auditing-configure.md) <br>[Advanced Threat Protection](threat-detection-configure.md) |Azure Resource Manager API for automating service provisioning and scaling <br>Azure portal functionality for manual service provisioning and scaling <br>Data Migration Service
> [!IMPORTANT] > Azure SQL Managed Instance has been certified against a number of compliance standards. For more information, see the [Microsoft Azure Compliance Offerings](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3?command=Download&downloadType=Document&downloadId=44bbae63-bf4d-4e3b-9d3d-c96fb25ec363&tab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb&docTab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb_FAQ_and_White_Papers), where you can find the most current list of SQL Managed Instance compliance certifications, listed under **SQL Database**.
Migration of an encrypted database to SQL Managed Instance is supported via Azur
SQL Managed Instance supports traditional SQL Server database engine logins and logins integrated with Azure AD. Azure AD server principals (logins) (**public preview**) are an Azure cloud version of on-premises database logins that you are using in your on-premises environment. Azure AD server principals (logins) enable you to specify users and groups from your Azure AD tenant as true instance-scoped principals, capable of performing any instance-level operation, including cross-database queries within the same managed instance.
-A new syntax is introduced to create Azure AD server principals (logins), **FROM EXTERNAL PROVIDER**. For more information on the syntax, see <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true">CREATE LOGIN</a>, and review the [Provision an Azure Active Directory administrator for SQL Managed Instance](../database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance) article.
+A new syntax is introduced to create Azure AD server principals (logins), **FROM EXTERNAL PROVIDER**. For more information on the syntax, see [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true), and review the [Provision an Azure Active Directory administrator for SQL Managed Instance](../database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance) article.
### Azure Active Directory integration and multi-factor authentication
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
SQL Managed Instance can't access files, so cryptographic providers can't be cre
### Logins and users - SQL logins created by using `FROM CERTIFICATE`, `FROM ASYMMETRIC KEY`, and `FROM SID` are supported. See [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql).-- Azure Active Directory (Azure AD) server principals (logins) created with the [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current) syntax or the [CREATE USER FROM LOGIN [Azure AD Login]](/sql/t-sql/statements/create-user-transact-sql?view=azuresqldb-mi-current) syntax are supported. These logins are created at the server level.
+- Azure Active Directory (Azure AD) server principals (logins) created with the [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) syntax or the [CREATE USER FROM LOGIN [Azure AD Login]](/sql/t-sql/statements/create-user-transact-sql?view=azuresqldb-mi-current&preserve-view=true) syntax are supported. These logins are created at the server level.
SQL Managed Instance supports Azure AD database principals with the syntax `CREATE USER [AADUser/AAD group] FROM EXTERNAL PROVIDER`. This feature is also known as Azure AD contained database users.
System databases are not replicated to the secondary instance in a failover grou
### TEMPDB - The maximum file size of `tempdb` can't be greater than 24 GB per core on a General Purpose tier. The maximum `tempdb` size on a Business Critical tier is limited by the SQL Managed Instance storage size. `Tempdb` log file size is limited to 120 GB on General Purpose tier. Some queries might return an error if they need more than 24 GB per core in `tempdb` or if they produce more than 120 GB of log data. - `Tempdb` is always split into 12 data files: 1 primary, also called master, data file and 11 non-primary data files. The file structure cannot be changed and new files cannot be added to `tempdb`. -- [Memory-optimized `tempdb` metadata](/sql/relational-databases/databases/tempdb-database?view=sql-server-ver15#memory-optimized-tempdb-metadata), a new SQL Server 2019 in-memory database feature, is not supported.
+- [Memory-optimized `tempdb` metadata](/sql/relational-databases/databases/tempdb-database?view=sql-server-ver15&preserve-view=true#memory-optimized-tempdb-metadata), a new SQL Server 2019 in-memory database feature, is not supported.
- Objects created in the model database cannot be auto-created in `tempdb` after a restart or a failover because `tempdb` does not get its initial object list from the model database. You must create objects in `tempdb` manually after each restart or a failover. ### MSDB
The following MSDB schemas in SQL Managed Instance must be owned by their respec
- General roles - TargetServersRole-- [Fixed database roles](/sql/ssms/agent/sql-server-agent-fixed-database-roles?view=sql-server-ver15)
+- [Fixed database roles](/sql/ssms/agent/sql-server-agent-fixed-database-roles?view=sql-server-ver15&preserve-view=true)
- SQLAgentUserRole - SQLAgentReaderRole - SQLAgentOperatorRole-- [DatabaseMail roles](/sql/relational-databases/database-mail/database-mail-configuration-objects?view=sql-server-ver15#DBProfile):
+- [DatabaseMail roles](/sql/relational-databases/database-mail/database-mail-configuration-objects?view=sql-server-ver15&preserve-view=true#DBProfile):
- DatabaseMailUserRole-- [Integration services roles](/sql/integration-services/security/integration-services-roles-ssis-service?view=sql-server-ver15):
+- [Integration services roles](/sql/integration-services/security/integration-services-roles-ssis-service?view=sql-server-ver15&preserve-view=true):
- db_ssisadmin - db_ssisltduser - db_ssisoperator
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
For more assistance with completing this migration scenario, see the following r
| | | | [Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
-The Data SQL Engineering team developed this resource. The team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
For additional assistance, see the following resources, which were developed in
|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.| |[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
For more assistance with completing this migration scenario, see the following r
| | | | [Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
-The Data SQL Engineering team developed this resource. The team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
For additional assistance with completing this migration scenario, please see th
| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. | | [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
For additional assistance, see the following resources that were developed for r
|[PerfMon data collection automation using Logman](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Perfmon%20Data%20Collection%20Automation%20Using%20Logman)|A tool that collects PerfMon data to understand baseline performance and assists in migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server| |[Whitepaper - Database migration to Azure SQL DB using BACPAC](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Database%20migrations%20-%20Benchmarks%20and%20Steps%20to%20Import%20to%20Azure%20SQL%20DB%20Single%20Database%20from%20BACPAC.pdf)|This whitepaper provides guidance and steps to help accelerate migrations from SQL Server to Azure SQL Database using BACPAC files.|
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
For additional assistance, see the following resources, which were developed in
|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.| |[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
-
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
For additional assistance with completing this migration scenario, please see th
| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. | | [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
For additional assistance, see the following resources that were developed for r
|[Perfmon data collection automation using Logman](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Perfmon%20Data%20Collection%20Automation%20Using%20Logman)|A tool that collects Perfmon data to understand baseline performance, which assists in the migration target recommendation. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server.| |[Whitepaper - Database migration to Azure SQL Managed Instance by restoring full and differential backups](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Database%20migrations%20to%20Azure%20SQL%20DB%20Managed%20Instance%20-%20%20Restore%20with%20Full%20and%20Differential%20backups.pdf)|This whitepaper provides guidance and steps to help accelerate migrations from SQL Server to Azure SQL Managed Instance if you only have full and differential backups (and no log backup capability).|
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
For additional assistance, see the following resources, which were developed in
|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.| |[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
For additional assistance with completing this migration scenario, please see th
| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. | | [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
For additional assistance, see the following resources that were developed for r
|[Multiple-SQL-VM-VNet-ILB](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/ARM%20Templates/Multiple-SQL-VM-VNet-ILB)|This whitepaper outlines the steps to set up multiple Azure virtual machines in a SQL Server Always On Availability Group configuration.| |[Azure virtual machines supporting Ultra SSD per Region](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Find%20Azure%20VMs%20supporting%20Ultra%20SSD)|These PowerShell scripts provide a programmatic option to retrieve the list of regions that support Azure virtual machines supporting Ultra SSDs.|
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
azure-sql Performance Improve Use Batching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/performance-improve-use-batching.md
In this article, we want to examine various batching strategies and scenarios. A
* The multitenant characteristics of Azure SQL Database and Azure SQL Managed Instance means that the efficiency of the data access layer correlates to the overall scalability of the database. In response to usage in excess of predefined quotas, Azure SQL Database and Azure SQL Managed Instance can reduce throughput or respond with throttling exceptions. Efficiencies, such as batching, enable you to do more work before reaching these limits. * Batching is also effective for architectures that use multiple databases (sharding). The efficiency of your interaction with each database unit is still a key factor in your overall scalability.
-One of the benefits of using Azure SQL Database or Azure SQL Managed Instance is that you donΓÇÖt have to manage the servers that host the database. However, this managed infrastructure also means that you have to think differently about database optimizations. You can no longer look to improve the database hardware or network infrastructure. Microsoft Azure controls those environments. The main area that you can control is how your application interacts with Azure SQL Database and Azure SQL Managed Instance. Batching is one of these optimizations.
+One of the benefits of using Azure SQL Database or Azure SQL Managed Instance is that you don't have to manage the servers that host the database. However, this managed infrastructure also means that you have to think differently about database optimizations. You can no longer look to improve the database hardware or network infrastructure. Microsoft Azure controls those environments. The main area that you can control is how your application interacts with Azure SQL Database and Azure SQL Managed Instance. Batching is one of these optimizations.
The first part of this article examines various batching techniques for .NET applications that use Azure SQL Database or Azure SQL Managed Instance. The last two sections cover batching guidelines and scenarios.
using (SqlConnection connection = new SqlConnection(CloudConfigurationManager.Ge
} ```
-Transactions are actually being used in both of these examples. In the first example, each individual call is an implicit transaction. In the second example, an explicit transaction wraps all of the calls. Per the documentation for the [write-ahead transaction log](/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver15#WAL), log records are flushed to the disk when the transaction commits. So by including more calls in a transaction, the write to the transaction log can delay until the transaction is committed. In effect, you are enabling batching for the writes to the serverΓÇÖs transaction log.
+Transactions are actually being used in both of these examples. In the first example, each individual call is an implicit transaction. In the second example, an explicit transaction wraps all of the calls. Per the documentation for the [write-ahead transaction log](/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver15&preserve-view=true#WAL), log records are flushed to the disk when the transaction commits. So by including more calls in a transaction, the write to the transaction log can delay until the transaction is committed. In effect, you are enabling batching for the writes to the server's transaction log.
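To make the pattern concrete, here is a minimal ADO.NET sketch that wraps a series of inserts in one explicit transaction so the log flush happens once at commit; the table, column, and connection-string names are placeholders, not taken from the article.

```csharp
using System.Data;
using System.Data.SqlClient;

using var connection = new SqlConnection("<your-connection-string>");
connection.Open();

// One explicit transaction wraps all of the inserts, so the write to the
// transaction log is deferred until Commit instead of happening per statement.
using var transaction = connection.BeginTransaction();

var command = new SqlCommand(
    "INSERT INTO MyTable (MyColumn) VALUES (@value)", connection, transaction);
command.Parameters.Add("@value", SqlDbType.NVarChar, 50);

for (int i = 0; i < 100; i++)
{
    command.Parameters["@value"].Value = "value " + i;
    command.ExecuteNonQuery();
}

transaction.Commit();
```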
The following table shows some ad hoc testing results. The tests performed the same sequential inserts with and without transactions. For more perspective, the first set of tests ran remotely from a laptop to the database in Microsoft Azure. The second set of tests ran from a cloud service and database that both resided within the same Microsoft Azure datacenter (West US). The following table shows the duration in milliseconds of sequential inserts with and without transactions.
SqlCommand cmd = new SqlCommand("sp_InsertRows", connection);
cmd.CommandType = CommandType.StoredProcedure; ```
-In most cases, table-valued parameters have equivalent or better performance than other batching techniques. Table-valued parameters are often preferable, because they are more flexible than other options. For example, other techniques, such as SQL bulk copy, only permit the insertion of new rows. But with table-valued parameters, you can use logic in the stored procedure to determine which rows are updates and which are inserts. The table type can also be modified to contain an ΓÇ£OperationΓÇ¥ column that indicates whether the specified row should be inserted, updated, or deleted.
+In most cases, table-valued parameters have equivalent or better performance than other batching techniques. Table-valued parameters are often preferable, because they are more flexible than other options. For example, other techniques, such as SQL bulk copy, only permit the insertion of new rows. But with table-valued parameters, you can use logic in the stored procedure to determine which rows are updates and which are inserts. The table type can also be modified to contain an "Operation" column that indicates whether the specified row should be inserted, updated, or deleted.
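As a hedged illustration of that flexibility, the sketch below passes a `DataTable` to the `sp_InsertRows` procedure shown above as a table-valued parameter; the table type name (`dbo.RowTableType`), the column names, and the `Operation` values are assumptions made for this example.

```csharp
using System.Data;
using System.Data.SqlClient;

// In-memory rows shaped like an assumed user-defined table type on the server.
// The Operation column lets the stored procedure decide per row whether to
// insert ("I"), update ("U"), or delete ("D").
var rows = new DataTable();
rows.Columns.Add("Value", typeof(string));
rows.Columns.Add("Operation", typeof(string));
rows.Rows.Add("first row", "I");
rows.Rows.Add("second row", "U");

using var connection = new SqlConnection("<your-connection-string>");
connection.Open();

var cmd = new SqlCommand("sp_InsertRows", connection);
cmd.CommandType = CommandType.StoredProcedure;

// The whole batch travels to the server in a single round trip.
var tvp = cmd.Parameters.AddWithValue("@Rows", rows);
tvp.SqlDbType = SqlDbType.Structured;
tvp.TypeName = "dbo.RowTableType";

cmd.ExecuteNonQuery();
```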
The following table shows ad hoc test results for the use of table-valued parameters in milliseconds.
If you do use parallel execution, consider controlling the maximum number of wor
Typical guidance on database performance also affects batching. For example, insert performance is reduced for tables that have a large primary key or many nonclustered indexes.
-If table-valued parameters use a stored procedure, you can use the command **SET NOCOUNT ON** at the beginning of the procedure. This statement suppresses the return of the count of the affected rows in the procedure. However, in our tests, the use of **SET NOCOUNT ON** either had no effect or decreased performance. The test stored procedure was simple with a single **INSERT** command from the table-valued parameter. It is possible that more complex stored procedures would benefit from this statement. But donΓÇÖt assume that adding **SET NOCOUNT ON** to your stored procedure automatically improves performance. To understand the effect, test your stored procedure with and without the **SET NOCOUNT ON** statement.
+If table-valued parameters use a stored procedure, you can use the command **SET NOCOUNT ON** at the beginning of the procedure. This statement suppresses the return of the count of the affected rows in the procedure. However, in our tests, the use of **SET NOCOUNT ON** either had no effect or decreased performance. The test stored procedure was simple with a single **INSERT** command from the table-valued parameter. It is possible that more complex stored procedures would benefit from this statement. But don't assume that adding **SET NOCOUNT ON** to your stored procedure automatically improves performance. To understand the effect, test your stored procedure with and without the **SET NOCOUNT ON** statement.
## Batching scenarios
-The following sections describe how to use table-valued parameters in three application scenarios. The first scenario shows how buffering and batching can work together. The second scenario improves performance by performing master-detail operations in a single stored procedure call. The final scenario shows how to use table-valued parameters in an ΓÇ£UPSERTΓÇ¥ operation.
+The following sections describe how to use table-valued parameters in three application scenarios. The first scenario shows how buffering and batching can work together. The second scenario improves performance by performing master-detail operations in a single stored procedure call. The final scenario shows how to use table-valued parameters in an "UPSERT" operation.
### Buffering Although there are some scenarios that are obvious candidate for batching, there are many scenarios that could take advantage of batching by delayed processing. However, delayed processing also carries a greater risk that the data is lost in the event of an unexpected failure. It is important to understand this risk and consider the consequences.
-For example, consider a web application that tracks the navigation history of each user. On each page request, the application could make a database call to record the userΓÇÖs page view. But higher performance and scalability can be achieved by buffering the usersΓÇÖ navigation activities and then sending this data to the database in batches. You can trigger the database update by elapsed time and/or buffer size. For example, a rule could specify that the batch should be processed after 20 seconds or when the buffer reaches 1000 items.
+For example, consider a web application that tracks the navigation history of each user. On each page request, the application could make a database call to record the user's page view. But higher performance and scalability can be achieved by buffering the users' navigation activities and then sending this data to the database in batches. You can trigger the database update by elapsed time and/or buffer size. For example, a rule could specify that the batch should be processed after 20 seconds or when the buffer reaches 1000 items.
The following code example uses [Reactive Extensions - Rx](/previous-versions/dotnet/reactive-extensions/hh242985(v=vs.103)) to process buffered events raised by a monitoring class. When the buffer fills or a timeout is reached, the batch of user data is sent to the database with a table-valued parameter.
To use this buffering class, the application creates a static NavHistoryDataMoni
### Master detail
-Table-valued parameters are useful for simple INSERT scenarios. However, it can be more challenging to batch inserts that involve more than one table. The ΓÇ£master/detailΓÇ¥ scenario is a good example. The master table identifies the primary entity. One or more detail tables store more data about the entity. In this scenario, foreign key relationships enforce the relationship of details to a unique master entity. Consider a simplified version of a PurchaseOrder table and its associated OrderDetail table. The following Transact-SQL creates the PurchaseOrder table with four columns: OrderID, OrderDate, CustomerID, and Status.
+Table-valued parameters are useful for simple INSERT scenarios. However, it can be more challenging to batch inserts that involve more than one table. The "master/detail" scenario is a good example. The master table identifies the primary entity. One or more detail tables store more data about the entity. In this scenario, foreign key relationships enforce the relationship of details to a unique master entity. Consider a simplified version of a PurchaseOrder table and its associated OrderDetail table. The following Transact-SQL creates the PurchaseOrder table with four columns: OrderID, OrderDate, CustomerID, and Status.
```sql CREATE TABLE [dbo].[PurchaseOrder](
This example demonstrates that even more complex database operations, such as ma
### UPSERT
-Another batching scenario involves simultaneously updating existing rows and inserting new rows. This operation is sometimes referred to as an ΓÇ£UPSERTΓÇ¥ (update + insert) operation. Rather than making separate calls to INSERT and UPDATE, the MERGE statement is best suited to this task. The MERGE statement can perform both insert and update operations in a single call.
+Another batching scenario involves simultaneously updating existing rows and inserting new rows. This operation is sometimes referred to as an "UPSERT" (update + insert) operation. Rather than making separate calls to INSERT and UPDATE, the MERGE statement is best suited to this task. The MERGE statement can perform both insert and update operations in a single call.
Table-valued parameters can be used with the MERGE statement to perform updates and inserts. For example, consider a simplified Employee table that contains the following columns: EmployeeID, FirstName, LastName, SocialSecurityNumber:
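Given such an Employee table, a minimal sketch of the MERGE-based UPSERT might look like the following; the user-defined table type `dbo.EmployeeTableType`, the sample data, and the connection string are assumptions for illustration rather than definitions from the article.

```csharp
using System.Data;
using System.Data.SqlClient;

// UPSERT: rows whose EmployeeID already exists are updated; the rest are inserted.
const string mergeSql = @"
MERGE INTO dbo.Employee AS target
USING @Employees AS source
    ON target.EmployeeID = source.EmployeeID
WHEN MATCHED THEN
    UPDATE SET target.FirstName = source.FirstName,
               target.LastName = source.LastName,
               target.SocialSecurityNumber = source.SocialSecurityNumber
WHEN NOT MATCHED THEN
    INSERT (EmployeeID, FirstName, LastName, SocialSecurityNumber)
    VALUES (source.EmployeeID, source.FirstName, source.LastName, source.SocialSecurityNumber);";

var employees = new DataTable();
employees.Columns.Add("EmployeeID", typeof(int));
employees.Columns.Add("FirstName", typeof(string));
employees.Columns.Add("LastName", typeof(string));
employees.Columns.Add("SocialSecurityNumber", typeof(string));
employees.Rows.Add(1, "Avery", "Smith", "123-45-6789");

using var connection = new SqlConnection("<your-connection-string>");
connection.Open();

var cmd = new SqlCommand(mergeSql, connection);
var tvp = cmd.Parameters.AddWithValue("@Employees", employees);
tvp.SqlDbType = SqlDbType.Structured;
tvp.TypeName = "dbo.EmployeeTableType";

cmd.ExecuteNonQuery();
```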
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-pop-locations.md
This article lists current Metros containing point-of-presence (POP) locations,
## Next steps
-* To get the latest IP addresses for allow listing, see the [Azure CDN Edge Nodes API](/rest/api/cdn/edgenodes).
+
+* To get the latest IP addresses for allow listing, see the [Azure CDN Edge Nodes API](/rest/api/cdn/cdn/edgenodes).
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/12/2021 Last updated : 3/28/2021
The following tables show the Microsoft Security Response Center (MSRC) updates
## March 2021 Guest OS
->[!NOTE]
-
->The March Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the March Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 21-03 | [5000822] | Latest Cumulative Update(LCU) | 6.29 | Mar 9, 2021 |
-| Rel 21-03 | [4580325] | Flash update | 3.95, 4.88, 5.53, 6.29 | Oct 13, 2020 |
-| Rel 21-03 | [5000800] | IE Cumulative Updates | 2.108, 3.95, 4.88 | Mar 9, 2021 |
-| Rel 21-03 | [5000803] | Latest Cumulative Update(LCU) | 5.53 | Mar 9, 2021 |
-| Rel 21-03 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.108 | Oct 13, 2020 |
-| Rel 21-03 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.108 | Oct 13, 2020 |
-| Rel 21-03 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.88 | Oct 13, 2020 |
-| Rel 21-03 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.88 | Oct 13, 2020 |
-| Rel 21-03 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.95 | Oct 13, 2020 |
-| Rel 21-03 | [4578954] | . NET Framework 4.5.2 Security and Quality Rollup  | 3.95 | Oct 13, 2020 |
-| Rel 21-03 | [4601060] | . NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.29 | Feb 9, 2021 |
-| Rel 21-03 | [5000841] | Monthly Rollup  | 2.108 | Mar 9, 2021 |
-| Rel 21-03 | [5000847] | Monthly Rollup  | 3.95 | Mar 9, 2021 |
-| Rel 21-03 | [5000848] | Monthly Rollup  | 4.88 | Mar 9, 2021 |
-| Rel 21-03 | [4566426] | Servicing Stack update  | 3.95 | July 14, 2020 |
-| Rel 21-03 | [4566425] | Servicing Stack update  | 4.88 | July 14, 2020 |
-| Rel 21-03 OOB | [4578013] | Standalone Security Update  | 4.88 | Aug 19, 2020 |
-| Rel 21-03 | [4592510] | Servicing Stack update  | 2.108 | Dec 8, 2020 |
-| Rel 21-03 | [5000859] | Servicing Stack update  | 6.29 | Mar 9, 2021 |
-| Rel 21-03 | [4494175] | Microcode  | 5.53 | Sep 1, 2020 |
-| Rel 21-03 | [4494174] | Microcode  | 6.29 | Sep 1, 2020 |
+| Rel 21-03 | [5000822] | Latest Cumulative Update(LCU) | [6.29] | Mar 9, 2021 |
+| Rel 21-03 | [4580325] | Flash update | [3.95], [4.88], [5.53], [6.29] | Oct 13, 2020 |
+| Rel 21-03 | [5000800] | IE Cumulative Updates | [2.108], [3.95], [4.88] | Mar 9, 2021 |
+| Rel 21-03 | [5000803] | Latest Cumulative Update(LCU) | [5.53] | Mar 9, 2021 |
+| Rel 21-03 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | [2.108] | Oct 13, 2020 |
+| Rel 21-03 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | [2.108] | Oct 13, 2020 |
+| Rel 21-03 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | [4.88] | Oct 13, 2020 |
+| Rel 21-03 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | [4.88] | Oct 13, 2020 |
+| Rel 21-03 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | [3.95] | Oct 13, 2020 |
+| Rel 21-03 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.95] | Oct 13, 2020 |
+| Rel 21-03 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.29] | Feb 9, 2021 |
+| Rel 21-03 | [5000841] | Monthly Rollup  | [2.108] | Mar 9, 2021 |
+| Rel 21-03 | [5000847] | Monthly Rollup  | [3.95] | Mar 9, 2021 |
+| Rel 21-03 | [5000848] | Monthly Rollup  | [4.88] | Mar 9, 2021 |
+| Rel 21-03 | [4566426] | Servicing Stack update  | [3.95] | July 14, 2020 |
+| Rel 21-03 | [4566425] | Servicing Stack update  | [4.88] | July 14, 2020 |
+| Rel 21-03 OOB | [4578013] | Standalone Security Update  | [4.88] | Aug 19, 2020 |
+| Rel 21-03 | [4592510] | Servicing Stack update  | [2.108] | Dec 8, 2020 |
+| Rel 21-03 | [5000859] | Servicing Stack update  | [6.29] | Mar 9, 2021 |
+| Rel 21-03 | [4494175] | Microcode  | [5.53] | Sep 1, 2020 |
+| Rel 21-03 | [4494174] | Microcode  | [6.29] | Sep 1, 2020 |
[5000822]: https://support.microsoft.com/kb/5000822 [4580325]: https://support.microsoft.com/kb/4580325
The following tables show the Microsoft Security Response Center (MSRC) updates
[5000859]: https://support.microsoft.com/kb/5000859 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174-
+[2.108]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.95]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.88]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.53]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.29]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## February 2021 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 2/19/2021 Last updated : 3/28/2021 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **March 28, 2021**
+The March Guest OS has released.
+ ###### **February 19, 2021** The February Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.29_202103-01 | March 28, 2021 | Post 6.31 |
| WA-GUEST-OS-6.28_202102-01 | February 19, 2021 | Post 6.30 |
-| WA-GUEST-OS-6.27_202101-01 | February 5, 2021 | Post 6.29 |
+|~~WA-GUEST-OS-6.27_202101-01~~| February 5, 2021 | March 28, 2021 |
|~~WA-GUEST-OS-6.26_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-6.25_202011-01~~| December 19, 2020 | February 5, 2021 | |~~WA-GUEST-OS-6.24_202010-02~~| November 17, 2020 | January 15, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.53_202103-01 | March 28, 2021 | Post 5.55 |
| WA-GUEST-OS-5.52_202102-01 | February 19, 2021 | Post 5.54 |
-| WA-GUEST-OS-5.51_202101-01 | February 5, 2021 | Post 5.53 |
+|~~WA-GUEST-OS-5.51_202101-01~~| February 5, 2021 | March 28, 2021 |
|~~WA-GUEST-OS-5.50_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-5.49_202011-01~~| December 19, 2020 | February 5, 2021 | |~~WA-GUEST-OS-5.48_202010-02~~| November 17, 2020 | January 15, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.88_202103-01 | March 28, 2021 | Post 4.90 |
| WA-GUEST-OS-4.87_202102-01 | February 19, 2021 | Post 4.89 |
-| WA-GUEST-OS-4.86_202101-01 | February 5, 2021 | Post 4.88 |
+|~~WA-GUEST-OS-4.86_202101-01~~| February 5, 2021 | March 28, 2021 |
|~~WA-GUEST-OS-4.85_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-4.84_202011-01~~| December 19, 2020 | February 5, 2021 | |~~WA-GUEST-OS-4.83_202010-02~~| November 17, 2020 | January 15, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.95_202103-01 | March 28, 2021 | Post 3.97 |
| WA-GUEST-OS-3.94_202102-01 | February 19, 2021 | Post 3.96 |
-| WA-GUEST-OS-3.93_202101-01 | February 5, 2021 | Post 3.95 |
+|~~WA-GUEST-OS-3.93_202101-01~~| February 5, 2021 | March 28, 2021 |
|~~WA-GUEST-OS-3.92_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-3.91_202011-01~~| December 19, 2020 | February 5, 2021 | |~~WA-GUEST-OS-3.90_202010-02~~| November 17, 2020 | January 15, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.108_202103-01 | March 28, 2021 | Post 2.110 |
| WA-GUEST-OS-2.107_202102-01 | February 19, 2021 | Post 2.109 |
-| WA-GUEST-OS-2.106_202101-01 | February 5, 2021 | Post 2.108 |
+|~~WA-GUEST-OS-2.106_202101-01~~| February 5, 2021 | March 28, 2021 |
|~~WA-GUEST-OS-2.105_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-2.104_202011-01~~| December 19, 2020 | February 5, 2021 | |~~WA-GUEST-OS-2.103_202010-02~~| November 17, 2020 | January 15, 2021 |
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/conversation-transcription.md
Previously updated : 03/16/2020 Last updated : 03/26/2021
-# What is Conversation Transcription in meetings (Preview)?
+# What is Conversation Transcription (Preview)?
Conversation Transcription is a [speech-to-text](speech-to-text.md) solution that combines speech recognition, speaker identification, and sentence attribution to each speaker (also known as _diarization_) to provide real-time and/or asynchronous transcription of any conversation. Conversation Transcription distinguishes speakers in a conversation to determine who said what and when, and makes it easy for developers to add speech-to-text to their applications that perform multi-speaker diarization.
Currently, Conversation Transcription supports [all speech-to-text languages](la
## Next steps > [!div class="nextstepaction"]
-> [Transcribe conversations in real time](how-to-use-conversation-transcription.md)
+> [Transcribe conversations in real time](how-to-use-conversation-transcription.md)
cognitive-services How To Migrate From Bing Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md
This article outlines the differences between the Bing Speech APIs and the Speec
A single Speech service subscription key grants access to the following features. Each is metered separately, so you're charged only for the features you use. * [Speech-to-text](speech-to-text.md)
-* [Custom speech-to-text](https://cris.ai)
+* [Custom speech-to-text](/azure/cognitive-services/speech-service/custom-speech-overview)
* [Text-to-speech](text-to-speech.md) * [Custom text-to-speech voices](./how-to-custom-voice-create-voice.md) * [Speech translation](speech-translation.md) (does not include [Text translation](../translator/translator-info-overview.md))
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial pose events > [!NOTE]
-> Viseme only works for `en-US-AriaNeural` voice in West US 2 (`westus2`) region for now.
+> Viseme only works for `en-US-AriaNeural` voice for now.
A viseme is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when speaking a word.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Previously updated : 03/15/2021 Last updated : 03/27/2021
For the usage with [Speech SDK](speech-sdk.md) and/or [Speech-to-text REST API f
#### Batch Transcription | Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
-| REST API limit | Batch transcription is not available for F0 | 300 requests per minute |
+| [Speech-to-text REST API V2.0 and v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) limit | Batch transcription is not available for F0 | 300 requests per minute |
| Max audio input file size | N/A | 1 GB | | Max input blob size (may contain more than one file, for example, in a zip archive; ensure to note the file size limit above) | N/A | 2.5 GB | | Max blob container size | N/A | 5 GB |
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
We will not read out the bookmark elements.
The bookmark element can be used to reference a specific location in the text or tag sequence. > [!NOTE]
-> `bookmark` element only works for `en-US-AriaNeural` voice in West US 2 (`westus2`) region for now.
+> `bookmark` element only works for `en-US-AriaNeural` voice for now.
**Syntax**
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
In this overview, you learn about the benefits and capabilities of the text-to-s
* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service. > [!NOTE]
-> Viseme only works for `en-US-AriaNeural` voice in West US 2 (`westus2`) region for now.
+> Viseme only works for `en-US-AriaNeural` voice for now.
## Get started
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/overview.md
As with all the cognitive services, developers using the Form Recognizer service
Try our online tool and quickstart to learn more about the Form Recognizer service.
-* [**Form Recognizer tool**](https://fott-preview.microsoft.com/)
+* [**Form Recognizer tool**](https://fott-preview.azurewebsites.net/)
* [**Client library and REST API quickstart**](quickstarts/client-library.md)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
See the [Communication Services Chat SDK Overview](./sdk-features.md) to learn m
## Chat overview
-Chat conversations happen within chat threads. A chat thread can contain many messages and many users. Every message belongs to a single thread, and a user can be a part of one or many threads. Each user in the chat thread is called a participant. Only thread participants can send and receive messages and add or remove other users in a chat thread. Communication Services stores chat history until you execute a delete operation on the chat thread or message, or until no participants are remaining in the chat thread, at which point, the chat thread is orphaned and queued for deletion.
-
-## Service limits
+Chat conversations happen within **chat threads**. Chat threads have the following properties:
+
+- A chat thread is uniquely identified by its `ChatThreadId`.
+- Chat threads can have one or many users as participants who can send messages to it.
+- A user can be a part of one or many chat threads.
+- Only the thread participants have access to a given chat thread, and only they can perform chat thread operations. These operations include sending and receiving messages, adding participants, and removing participants.
+- Users are automatically added as a participant to any chat threads that they create.
+
+### User access
+Typically, the thread creator and participants have the same level of access to the thread and can execute all related operations available in the SDK, including deleting it. Participants don't have write access to messages sent by other participants, which means only the message sender can update or delete their sent messages. If another participant tries to do that, they'll get an error.
+
+If you want to limit access to chat features for a set of users, you can configure access as part of your trusted service. Your trusted service is the service that orchestrates the authentication and authorization of chat participants. We'll explore this in further detail below.
+### Chat data
+Communication Services stores chat history until explicitly deleted. Chat thread participants can use `ListMessages` to view message history for a particular thread. Users removed from a chat thread will be able to view previous message history, but they won't be able to send or receive new messages as part of that chat thread. A fully idle thread with no participants will be automatically deleted after 30 days. To learn more about data being stored by Communication Services, refer to documentation on [privacy](../privacy.md).
+
+### Service limits
- The maximum number of participants allowed in a chat thread is 250. - The maximum message size allowed is approximately 28 KB. - For chat threads with more than 20 participants, read receipts and typing indicator features aren't supported. -- + ## Chat architecture There are two core parts to chat architecture: 1) Trusted Service and 2) Client Application. :::image type="content" source="../../media/chat-architecture.png" alt-text="Diagram showing Communication Services' chat architecture.":::
-We recommend generating access tokens using the trusted service tier. In this scenario the server side would be responsible for creating and managing users and issuing their tokens.
+ - **Trusted service:** To properly manage a chat session, you need a service that helps you connect to Communication Services by using your resource connection string. This service is responsible for creating chat threads, adding and removing participants, and issuing access tokens to users. More information about access tokens can be found in our [access tokens](../../quickstarts/access-tokens.md) quickstart.
+ - **Client app:** The client application connects to your trusted service and receives the access tokens that are used by users to connect directly to Communication Services. Once your trusted service has created the chat thread and added users as participants, they can use the client app to connect to the chat thread and send messages. Use the real-time notifications feature, discussed below, in your client app to subscribe to message and thread updates from other participants.
+
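As a rough sketch of the trusted-service side, the snippet below uses the Azure.Communication.Identity .NET package to create a user and issue a chat-scoped access token; the connection string is a placeholder and the surrounding web API plumbing and error handling are omitted.

```csharp
using System;
using Azure.Communication.Identity;

// Runs inside your trusted service, never in the client app.
var identityClient = new CommunicationIdentityClient("<your-resource-connection-string>");

// Create a Communication Services user and issue a token scoped to chat.
var user = await identityClient.CreateUserAsync();
var tokenResponse = await identityClient.GetTokenAsync(
    user.Value, scopes: new[] { CommunicationTokenScope.Chat });

// Return tokenResponse.Value.Token to the client app over an authenticated
// channel; the client uses it to connect to Communication Services directly.
Console.WriteLine($"Issued token expires on {tokenResponse.Value.ExpiresOn}");
```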
## Message types
-Communication Services Chat shares user-generated messages as well as system-generated messages called **Thread activities**. Thread activities are generated when a chat thread is updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain the user-generated text messages as well as the system messages in chronological order. This helps you identify when a participant was added or removed or when the chat thread topic was updated. Supported message types are:
-
--
-```
-{
- "id": "1613589626560",
- "type": "participantAdded",
- "sequenceId": "7",
- "version": "1613589626560",
- "content":
- {
- "participants":
- [
- {
- "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
- "displayName": "Jane",
- "shareHistoryTime": "1970-01-01T00:00:00Z"
- }
- ],
- "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
- },
- "createdOn": "2021-02-17T19:20:26Z"
- }
-```
--- `ThreadActivity/ParticipantRemoved`: System message that indicates a participant has been removed from the chat thread. For example: -
-```
-{
- "id": "1613589627603",
- "type": "participantRemoved",
- "sequenceId": "8",
- "version": "1613589627603",
- "content":
- {
- "participants":
- [
- {
- "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
- "displayName": "Jane",
- "shareHistoryTime": "1970-01-01T00:00:00Z"
- }
- ],
- "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
- },
- "createdOn": "2021-02-17T19:20:27Z"
- }
-```
--- `ThreadActivity/TopicUpdate`: System message that indicates the thread topic has been updated. For example:
-```
-{
- "id": "1613589623037",
- "type": "topicUpdated",
- "sequenceId": "2",
- "version": "1613589623037",
- "content":
- {
- "topic": "New topic",
- "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
- },
- "createdOn": "2021-02-17T19:20:23Z"
- }
-```
-
-## Real-time signaling
-
-The Chat JavaScript SDK includes real-time signaling. This allows clients to listen for real-time updates and incoming messages to a chat thread without having to poll the APIs. Available events include:
---
-## Chat events
-
-Real-time signaling allows your users to chat in real-time. Your services can use Azure Event Grid to subscribe to chat-related events. For more details, see [Event Handling conceptual](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services?tabs=event-grid-event-schema).
--
-## Using Cognitive Services with Chat SDK to enable intelligent features
-
-You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat SDK to add intelligent features to your applications. For example, you can:
+As part of message history, Chat shares user-generated messages as well as system-generated messages. System messages are generated when a chat thread is updated and can help identify when a participant was added or removed or when the chat thread topic was updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain both kinds of messages in chronological order.
+
+For user-generated messages, the message type can be set in `SendMessageOptions` when sending a message to a chat thread. If no value is provided, Communication Services will default to the `text` type. Setting this value is important when sending HTML. When `html` is specified, Communication Services will sanitize the content to ensure that it's rendered safely on client devices.
+ - `text`: A plain text message composed and sent by a user as part of a chat thread.
+ - `html`: A formatted message using html, composed and sent by a user as part of chat thread.
+
+Types of system messages:
+ - `participantAdded`: System message that indicates one or more participants have been added to the chat thread.
+ - `participantRemoved`: System message that indicates a participant has been removed from the chat thread.
+ - `topicUpdated`: System message that indicates the thread topic has been updated.
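For illustration only, here is a hedged sketch of sending an HTML message with the .NET Chat SDK, where the option type corresponding to the `SendMessageOptions` mentioned above is `SendChatMessageOptions`; the endpoint, token, and thread ID are placeholders, and exact member names can differ slightly between SDK versions.

```csharp
using System;
using Azure.Communication;
using Azure.Communication.Chat;

// The user access token comes from your trusted service.
var chatClient = new ChatClient(
    new Uri("https://<your-resource>.communication.azure.com"),
    new CommunicationTokenCredential("<user-access-token>"));

ChatThreadClient threadClient = chatClient.GetChatThreadClient("<thread-id>");

// Explicitly mark the message as HTML; Communication Services sanitizes the
// content so it renders safely on client devices. Omitting the type sends text.
await threadClient.SendMessageAsync(new SendChatMessageOptions
{
    Content = "<b>Your order has shipped.</b>",
    MessageType = ChatMessageType.Html
});
```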
+
+## Real-time notifications
+
+Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This feature lets clients listen to Communication Services for real-time updates and incoming messages to a chat thread without having to poll the APIs. The client app can subscribe to the following events:
+ - `chatMessageReceived` - when a new message is sent to a chat thread by a participant.
+ - `chatMessageEdited` - when a message is edited in a chat thread.
+ - `chatMessageDeleted` - when a message is deleted in a chat thread.
+ - `typingIndicatorReceived` - when another participant sends a typing indicator to the chat thread.
+ - `readReceiptReceived` - when another participant sends a read receipt for a message they have read.
+ - `chatThreadCreated` - when a chat thread is created by a Communication Services user.
+ - `chatThreadDeleted` - when a chat thread is deleted by a Communication Services user.
+ - `chatThreadPropertiesUpdated` - when chat thread properties are updated; currently, only updating the topic for the thread is supported.
+ - `participantsAdded` - when a user is added as a chat thread participant.
+ - `participantsRemoved` - when an existing participant is removed from the chat thread.
+
+Real-time notifications can be used to provide a real-time chat experience for your users. To send push notifications for messages missed by your users while they were away, Communication Services integrates with Azure Event Grid to publish chat-related events (after each operation completes), which can be plugged into your custom app notification service. For more details, see [Server Events](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fcommunication-services%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json).
++
+## Build intelligent, AI-powered chat experiences
+
+You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat SDK to build use cases like:
- Enable users to chat with each other in different languages. -- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming issue from a customer.
+- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming message from a customer.
- Analyze the incoming messages for key-phrase detection and entity recognition, and surface relevant info to the user in your app based on the message content. One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other participants [1], calling cognitive APIs to translate the content to the desired language [2, 3], and sending the translated result as a message in the chat thread [4].
This way, the message history will contain both original and translated messages
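A rough sketch of the translation step (steps 2 and 3) using the Translator REST API is shown below; the key, region, target language, and sample text are placeholders, and posting the result back to the thread (step 4) would reuse a chat send call like the one sketched earlier.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<translator-key>");
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Region", "<translator-region>");

// Translate the text of an incoming chat message to Spanish (Translator v3.0).
var response = await http.PostAsJsonAsync(
    "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es",
    new[] { new { Text = "Hello, how can I help you today?" } });

using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
string translated = doc.RootElement[0]
    .GetProperty("translations")[0]
    .GetProperty("text")
    .GetString();

// The trusted-service participant would then send `translated` back to the
// chat thread, for example with ChatThreadClient.SendMessageAsync.
Console.WriteLine(translated);
```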
> [Get started with chat](../../quickstarts/chat/get-started.md) The following documents may be interesting to you: -- Familiarize yourself with the [Chat SDK](sdk-features.md)
+- Familiarize yourself with the [Chat SDK](sdk-features.md)
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
The following list presents the set of features which are currently available in
| Group of features | Capability | JavaScript | Java | .NET | Python | iOS | Android | |--|-||--|-|--|-|-|
-| Core Capabilities | Create a chat thread between 2 or more users (up to 250 users) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Core Capabilities | Create a chat thread between 2 or more users | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| | Update the topic of a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Add or remove participants from a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Choose whether to share chat message history with the participant being added | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features which are currently available in
| | Given a communication user, get the list of chat threads the user is part of | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Get info for a particular chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Send and receive messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Edit the contents of a sent message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Delete a message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Read receipts for messages that have been read by other participants in a chat <br/> *Not available when there are more than 20 participants in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get notified when participants are actively typing a message in a chat thread <br/> *Not available when there are more than 20 members in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get all messages in a chat thread <br/> | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Update the content of your sent message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Delete a message you previously sent | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Read receipts for messages that have been read by other participants in a chat | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get notified when participants are actively typing a message in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get all messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-|Real-time signaling (enabled by proprietary signaling package**)| Subscribe to get real-time updates for incoming messages and other operations in your chat app. To see a list of supported updates for real-time signaling, see [Chat concepts](concepts.md#real-time-signaling) | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Event Grid support | Use integration with Azure Event Grid and configure your communication service to execute business logic based on chat activity or to plug in a custom push notification service | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Monitoring | Use the API request metrics emitted in the Azure portal to build dashboards, monitor the health of your chat app, and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Configure your Communication Services resource to receive chat operational logs for monitoring and diagnostic purposes | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+|Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Integration with Azure Event Grid | Use the chat events available in Azure Event Grid to plug custom notification services or post that event to a webhook to execute business logic like updating CRM records after a chat is finished | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Reporting </br>(This info is available under Monitoring tab for your Communication Services resource on Azure portal) | Understand API traffic from your chat app by monitoring the published metrics in Azure Metrics Explorer and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Monitor and debug your Communication Services solution by enabling diagnostic logging for your resource | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
**The proprietary signaling package is implemented using web sockets. It will fallback to long polling if web sockets are unsupported.
The following table represents the set of supported browsers and versions which
> [Get started with chat](../../quickstarts/chat/get-started.md) The following documents may be interesting to you: -- Familiarize yourself with [chat concepts](../chat/concepts.md)
+- Familiarize yourself with [chat concepts](../chat/concepts.md)
+- Understand how [pricing](../pricing.md#chat) works for chat
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Azure Communication Services maintains a directory of phone numbers associated w
### Chat
-Chat threads and messages are retained until explicitly deleted. A fully idle thread will be automatically deleted after 30 days. Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, and delete messages.
+Chat threads and messages are retained until explicitly deleted. A fully idle thread with no participants will be automatically deleted after 30 days. Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, and delete messages.
- `Get Thread` - `Get Message`
+- `List Messages`
+- `Update Message`
- `Delete Thread` - `Delete Message`
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
# SMS concepts- [!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] - [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Azure Communication Services enables you to send and receive SMS text messages using the Communication Services SMS SDKs. These SDKs can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Communication Services SMS allows you to reliably send messages while exposing deliverability and response rate insights surrounding your campaigns.
+Azure Communication Services enables you to send and receive SMS text messages using the Communication Services SMS SDKs. These SDKs can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Communication Services SMS allows you to reliably send messages while exposing deliverability and response metrics.
Key features of Azure Communication Services SMS SDKs include: - **Simple** setup experience for adding SMS capability to your applications. - **High Velocity** message support over toll free numbers for A2P (Application to Person) use cases in the United States.
+- **Bulk Messaging** support to enable sending messages to multiple recipients at a time.
- **Two-way** conversations to support scenarios like customer support, alerts, and appointment reminders. - **Reliable Delivery** with real-time delivery reports for messages sent from your application.-- **Analytics** to track your usage patterns and customer engagement.
+- **Analytics** to track your SMS usage patterns.
- **Opt-Out** handling support to automatically detect and respect opt-outs for toll-free numbers. Opt-outs for US toll-free numbers are mandated and enforced by US carriers.
 - STOP - If a text message recipient wishes to opt out, they can send 'STOP' to the toll-free number. The carrier sends the following default response for STOP: *"NETWORK MSG: You replied with the word "stop" which blocks all texts sent from this number. Text back "unstop" to receive messages again."*
 - START/UNSTOP - If the recipient wishes to resubscribe to text messages from a toll-free number, they can send 'START' or 'UNSTOP' to the toll-free number. The carrier sends the following default response for START/UNSTOP: *"NETWORK MSG: You have replied "unstop" and will begin receiving messages again from this number."*
 - Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with the status message "Sender blocked for given recipient."
- - The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
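To show how a couple of the features above come together in code, here is a minimal sketch using the SMS SDK for .NET; the phone numbers and connection string are placeholders, and the multi-recipient overload illustrates bulk messaging with delivery reports enabled.

```csharp
using System;
using Azure.Communication.Sms;

var smsClient = new SmsClient("<your-resource-connection-string>");

// Send one message to multiple recipients (bulk messaging) and request
// delivery reports so the per-message delivery status can be tracked.
var response = await smsClient.SendAsync(
    from: "+18005551234",                        // your toll-free number
    to: new[] { "+14255550123", "+14255550124" },
    message: "Your appointment is tomorrow at 9:00 AM.",
    options: new SmsSendOptions(enableDeliveryReport: true));

foreach (SmsSendResult result in response.Value)
{
    Console.WriteLine($"{result.To}: sent={result.Successful}, id={result.MessageId}");
}
```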
- ## Next steps
communication-services Messaging Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/messaging-policy.md
+
+ Title: Messaging Policy
+
+description: Learn about SMS messaging policies.
+++++ Last updated : 03/19/2021++++
+# Azure Communication Services Messaging Policy
++
+Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams and Skype. Integrate SMS messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements to get started.
+
+We know that messaging requirements can seem daunting to learn, but they're as easy as remembering "COMS":
+
+- C - Consent
+- O - Opt-Out
+- M - Message Content
+- S - Spoofing
+
+We developed this messaging policy to help you satisfy regulatory requirements and align with recommended best practices.
++
+## Consent
+
+### What is consent?
+
+Consent is an agreement between you and the message recipient that allows you to send automated messages to them. You must obtain consent before sending the first message, and you should make clear to the recipient that they're agreeing to receive messages from you. This procedure is known as receiving "prior express consent" from the individual you intend to message.
+
+The messages that you send must be the same type of messages that the recipient agreed to receive and should only be sent to the number that the recipient provided to you. If you intend to send informational messages, such as appointment reminders or alerts, then consent can be either written or oral. If you intend to send promotional messages, such as sales or marketing messages that promote a product or service, then consent must be written.
+
+### How do you obtain consent?
+
+Consent can be obtained in a variety of ways, such as:
+
+- when a user enters their telephone number into a website,
+- when a user initiates a text message exchange, or
+- when a user sends a sign-up keyword to your phone number.
+
+Regardless of how consent is obtained, you and your customers must ensure that the consent is unambiguous. The scope of the consent should be clear to the recipient.
++
+### Consent requirements:
+
+- Provide a "Call to Action" before obtaining consent. You and your customers should provide potential message recipients with a "call to action" that invites them to opt in to your messaging program. The call to action should include, at a minimum: (1) the identity of the message sender, (2) clear opt-in instructions, (3) opt-out instructions, and (4) any associated messaging fees.
+- Consent isn't transferable or assignable. Any consent that an individual provides to you cannot be transferred or sold to an unaffiliated third party. If you collect an individual's consent for a third party, then you must clearly identify the third party to the individual. You must also state that the consent you obtained applies only to communications from the third party.
+- Consent is limited in purpose. An individual who provides their number for a particular purpose consents to receive communications only for that specific purpose and from that specific message sender. Before obtaining consent, you should clearly notify the intended message recipient if you'll send recurring messages or messages from an affiliate.
+
+### Consent best practices:
+
+In addition to the messaging requirements discussed above, you may want to implement several common best practices, including:
+
+- Detailed "Call to Action" information. To ensure that you obtain appropriate consent, provide
+ - the name or description of your messaging program or product
+ - the number(s) from which recipients will receive messages, and
+ - any applicable terms and conditions before an individual opts-in to receiving messages from you.
+- Accurate records of consent. You should retain records of any consent that an individual provides to you for at least four years. Records of consent can include:
+ - timestamps
+ - the medium by which consent was obtained
+ - the specific campaign for which consent was obtained
+ - screen captures
+ - the session ID or IP address of the consenting individual.
+- Privacy and security policies. Developers are encouraged to provide straightforward privacy policies that message recipients can review before their consent is obtained. We also recommend maintaining proactive security controls to safeguard individuals' private information.
++
+## Double opt-In consent:
+
+Azure Communication Services recommends that you use double opt-in consent for all messaging campaigns. Double opt-in consent is a two-step process where an individual first provides consent to receive certain types of messages from you. You then send a follow-up opt-in message to confirm their consent. You should send more messages only once the message recipient confirms their consent.
+
+The initial confirmation message that you send should include your identity, the option to opt out of future messages (such as the use of a "STOP" command), a toll-free number or "HELP" command for additional information, notification that the individual is enrolled in a recurring message program, a brief description of the program, the frequency with which you intend to send recurring messages, and any associated fees.
+
+### Does Azure Communication Services ever require double opt-in consent?
+Yes, while double opt-in consent is always recommended, Azure Communication Services requires that you use double opt-in consent for some types of messaging campaigns due to their frequent use in phishing schemes or their tendency to result in consumer complaints. These campaigns include:
+- Auto-warranty messages
+- Short-term health insurance plans
+- Debt refinancing or interest rate reduction messages if not made by a financial institution
+- Lead generation messages
+- Sweepstakes, contests, and giveaways
+- Work-from-home offers
+
+The campaigns for which double opt-in consent is required are subject to change at the discretion of Azure Communication Services.
+
+### Exceptions to traditional consent rules:
+While prior express consent is normally required before sending a message, there are two situations in which consent to message an individual is implied.
+
+- Recipient initiates a communication. If an individual initiates a communication by sending a message to you, then you may provide relevant information in response to a specific inquiry or request contained in the message. However, the implied consent that the individual provided is limited to the conversation that the individual initiated unless you obtain consent for further communications.
+- Exemptions for specific services. There are several specific services for which you may have implied consent to initiate a message. The most common of these are:
+  - package delivery messages
+  - financial institution messages that concern time-sensitive topics (such as potentially fraudulent transactions or data breaches)
+  - healthcare provider messages that include time-sensitive information and a treatment purpose (such as appointment or exam reminders, lab results, and prescription notifications).
+
+None of these messages may include solicitations or advertisements.
++
+## Opt-out
+
+Message recipients may revoke consent and opt out of receiving future messages through any reasonable means. You may not designate an exclusive means for message recipients to revoke consent.
+
+### Opt-out requirements:
+
+Ensure that message recipients can opt out of future messages at any time. You must also offer multiple opt-out options. After a message recipient opts out, you should not send additional messages unless the individual provides renewed consent.
+
+One of the most common opt-out mechanisms is to include a "STOP" keyword in the initial message of every new conversation. Be prepared to remove customers that reply with a lowercase "stop" or other common keywords, such as "unsubscribe" or "cancel." After an individual revokes consent, you should remove them from all recurring messaging campaigns unless they expressly elect to continue receiving messages from a particular program.
+
+### Opt-out best practices:
+
+In addition to keywords, other common opt-out mechanisms include providing customers with a designated opt-out e-mail address, the phone number of customer support staff, or a link to unsubscribe on your webpage.
++
+### How we handle opt-out requests:
+
+If an individual requests to opt out of future messages on an Azure Communication Services toll-free number, then all further traffic from that number will be automatically stopped. However, you must still ensure that you do not send additional messages for that messaging campaign from new or different numbers. If you have separately obtained express consent for a different messaging campaign, then you may continue to send messages from a different number for that campaign. Check out our FAQ section to learn more about [Opt-out handling](https://github.com/Prakulk#how-can-i-receive-messages-using-azure-communication-services).
+
+## Message content
+
+### Adult content:
+
+Message content that includes elements of sex, hate, alcohol, firearms, tobacco, gambling, or sweepstakes and contests can trigger additional requirements. This content is expressly prohibited in some jurisdictions. If you send a message that includes this content, then it is your duty to abide by all applicable laws of the jurisdictions in which the communications are received. At the request of law enforcement or Azure Communication Services, you must be prepared to provide proof of consent in accordance with local laws that regulate adult content.
+
+Even where such content is not unlawful, you should include an age verification mechanism at opt-in to age-gate the intended message recipient from adult content. In the United States, additional legal requirements apply to marketing communications directed at children under the age of 13.
+
+### Prohibited content:
+
+Azure Communication Services prohibits certain message content regardless of consent. Prohibited content includes:
+- Content that promotes unlawful activities (e.g., tax evasion or animal cruelty in the United States)
+- Hate speech, defamatory speech, harassment, or other speech determined to be patently offensive
+- Pornographic content
+- Obscene or vulgar content
+- Intimidation and threats
+- Content that intends to defraud, deceive, cause harm, or wrongfully obtain anything of value
+- Content that incites harm, discrimination, or violence
+- Content that spreads malware
+- Content that intends to evade age-gating requirements
+
+We reserve the right to modify the list of prohibited message content at any time.
+
+## Spoofing
+
+Spoofing is the act of causing a misleading or inaccurate originating number to display on a message recipient's device. We strongly discourage you and any service provider that you use from sending spoofed messages. Spoofing shields the identity of the message sender and prevents message recipients from easily opting out of unwanted communications. We also require that you abide by all applicable spoofing laws.
+
+## Final thoughts
+
+### Legal Responsibility:
+
+This Messaging Policy does not constitute legal advice, and we reserve the right to modify the policy at any time. Azure Communication Services is not responsible for ensuring that the content, timing, or recipients of our customers' messages meet all applicable legal requirements.
+
+Our customers are responsible for all messaging requirements. If you are a platform or software provider that uses Azure Communication Services for messaging purposes, then you should require that your customers also abide by all of the requirements discussed in this Messaging Policy. For further guidance, the CTIA provides helpful [Messaging Principles and Best Practices](https://api.ctia.org/wp-content/uploads/2019/07/190719-CTIA-Messaging-Principles-and-Best-Practices-FINAL.pdf).
+
+### Penalties:
+
+We encourage our customers to develop and implement policies and procedures designed to ensure compliance with all messaging requirements. Violations of messaging requirements may lead to substantial fines that can balloon quickly. It's in your best interest to learn and abide by all applicable messaging requirements and develop effective mitigation safeguards to contain and eliminate violations before they spread. If you do breach our Messaging Policy or other legal requirements, then we'll work with you to ensure future compliance. However, we reserve the right to remove any customer from the Azure Communication Services platform who demonstrates a pattern of noncompliance with our Messaging Policy or legal requirements.
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sdk-features.md
description: Provides an overview of the SMS SDK and its offerings.
-- Previously updated : 03/10/2021+ Last updated : 03/26/2021
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] - [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)] Azure Communication Services SMS SDKs can be used to add SMS messaging to your applications.
The following list presents the set of features which are currently available in
| Group of features | Capability | JS | Java | .NET | Python | | -- | - | | - | - | |
-| Core Capabilities | Send and receive SMS messages </br> *Unicode emojis supported* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Receive Delivery Reports for messages sent | ✔️ | ✔️ | ✔️ | ✔️ |
+| Core Capabilities | Send and receive SMS messages | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Enable Delivery Reports for messages sent | ✔️ | ✔️ | ✔️ | ✔️ |
| | All character sets (language/unicode support) | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Support for long messages (up to 2048 char) | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Support for long messages (up to 2048 bytes) | ✔️ | ✔️ | ✔️ | ✔️ |
| | Auto-concatenation of long messages | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Send messages to multiple recipients at a time | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Support for idempotency | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Custom tags for messages. | ✔️ | ✔️ | ✔️ | ✔️ |
| Events | Use Event Grid to configure webhooks to receive inbound messages and delivery reports | ✔️ | ✔️ | ✔️ | ✔️ | | Phone Number | Toll-Free numbers | ✔️ | ✔️ | ✔️ | ✔️ |
-| Regulatory | Opt-Out Handling | ✔️ | ✔️ | ✔️ | ✔️ |
-| Monitoring | Monitor usage for messages sent and received | ✔️ | ✔️ | ✔️ | ✔️ |
| PSTN Calling | Add PSTN calling capabilities to your SMS-enabled toll-free number | ✔️ | ✔️ | ✔️ | ✔️ | ## Next steps
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sms-faq.md
+
+ Title: SMS FAQ
+
+description: SMS FAQ
+++++ Last updated : 03/26/2021++++
+# SMS FAQ
+
+## Can a customer use Azure Communication Services for emergency purposes?
+
+Azure Communication Services does not support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
+
+## Are there any limits on sending messages?
+
+To ensure that we continue offering the high quality of service consistent with our SLAs, Azure Communication Services applies rate limits (different for each primitive). Developers who call our APIs beyond the limit will receive a 429 (Too Many Requests) HTTP status code response. If your company has requirements that exceed the rate limits, please email us at phone@microsoft.com.
+
+Rate Limits for SMS:
+
+|Operation|Scope|Timeframe (s)| Limit (request #) | Message units per minute|
+||--|-|-|-|
+|Send Message|Per Number|60|200|200|
+
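If your application approaches these limits, it should expect and handle throttled requests. The following is a minimal sketch using the .NET SDK; the connection string and phone numbers are placeholders, and the simple delay-and-retry shown is an assumption rather than a prescribed pattern.

```csharp
using System;
using System.Threading;
using Azure;
using Azure.Communication.Sms;

class Program
{
    static void Main()
    {
        var smsClient = new SmsClient("<connection-string>");

        try
        {
            // Send a single message from your SMS-enabled number.
            SmsSendResult result = smsClient.Send(
                from: "+18005551234",
                to: "+14255550123",
                message: "Appointment reminder");
            Console.WriteLine($"Sent {result.MessageId}, successful: {result.Successful}");
        }
        catch (RequestFailedException ex) when (ex.Status == 429)
        {
            // The per-number rate limit was exceeded; back off briefly before retrying.
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
```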
+## How does Azure Communication Services handle opt-outs for Toll-free numbers?
+
+Opt-outs for US toll-free numbers are mandated and enforced by US carriers.
+- STOP - If a text message recipient wishes to opt out, they can send 'STOP' to the toll-free number. The carrier sends the following default response for STOP: "NETWORK MSG: You replied with the word "stop" which blocks all texts sent from this number. Text back "unstop" to receive messages again."
+- START/UNSTOP - If the recipient wishes to resubscribe to text messages from a toll-free number, they can send 'START' or 'UNSTOP' to the toll-free number. The carrier sends the following default response for START/UNSTOP: "NETWORK MSG: You have replied "unstop" and will begin receiving messages again from this number."
+- Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with the status message "Sender blocked for given recipient."
+- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
+
+## How can I receive messages using Azure Communication Services?
+
+Azure Communication Services customers can use Azure Event Grid to receive incoming messages. Follow this [quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/telephony-sms/handle-sms-events) to set up Event Grid to receive messages.
+
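As a rough illustration of what an event handler might look like, the sketch below parses Event Grid events with the `Azure.Messaging.EventGrid` library and picks out inbound SMS events. The library, event type, and property names follow the public system event schema but should be treated as assumptions; refer to the quickstart for the authoritative steps (the webhook validation handshake is omitted here).

```csharp
using System;
using Azure.Messaging.EventGrid;
using Azure.Messaging.EventGrid.SystemEvents;

static class SmsEventHandler
{
    // requestBody is the raw JSON payload delivered by Event Grid to your webhook or Function.
    public static void Handle(BinaryData requestBody)
    {
        foreach (EventGridEvent egEvent in EventGridEvent.ParseMany(requestBody))
        {
            if (egEvent.TryGetSystemEventData(out object systemEvent) &&
                systemEvent is AcsSmsReceivedEventData sms)
            {
                Console.WriteLine($"Inbound SMS from {sms.From} to {sms.To}: {sms.Message}");
            }
        }
    }
}
```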
+## Can I send/receive long messages (>2048 chars)?
+
+Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages.
+
+## How are messages sent to landline numbers treated?
+
+In the United States, Azure Communication Services does not check for landline numbers and will attempt to send the message to carriers for delivery. Customers will be charged for messages sent to landline numbers.
+
+## Can I send messages to multiple recipients?
++
+Yes, you can make one request with multiple recipients. Follow this [quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/telephony-sms/send?pivots=programming-language-csharp) to send messages to multiple recipients.
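As a rough sketch with the .NET SDK, a single `Send` call can take a list of recipients and returns one result per recipient; delivery reports and a custom tag are optional. The connection string and phone numbers below are placeholders.

```csharp
using System;
using System.Collections.Generic;
using Azure;
using Azure.Communication.Sms;

class Program
{
    static void Main()
    {
        var smsClient = new SmsClient("<connection-string>");

        Response<IReadOnlyList<SmsSendResult>> response = smsClient.Send(
            from: "+18005551234",
            to: new[] { "+14255550123", "+14255550124" },
            message: "Weekly promotion",
            options: new SmsSendOptions(enableDeliveryReport: true) { Tag = "marketing" });

        foreach (SmsSendResult result in response.Value)
        {
            // One result per recipient, each with its own message ID and status.
            Console.WriteLine($"{result.To}: successful={result.Successful}, id={result.MessageId}");
        }
    }
}
```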
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
Last updated 03/10/2021
-zone_pivot_groups: acs-plat-azp-net
+zone_pivot_groups: acs-plat-azp-azcli-net-ps
# Quickstart: Create and manage Communication Services resources
zone_pivot_groups: acs-plat-azp-net
Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management SDK. The management SDK and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the SDKs is available in the Azure portal.
-Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management SDK. The management SDK and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the SDKs is available in the Azure portal.
- > [!WARNING] > Note that while Communication Services is available in multiple geographies, in order to get a phone number the resource must have a data location set to 'US'. Also note that communication resources cannot be transferred to a different subscription during public preview.
Get started with Azure Communication Services by provisioning your first Communi
[!INCLUDE [.NET](./includes/create-resource-net.md)] ::: zone-end ++ ## Access your connection strings and service endpoints Connection strings allow the Communication Services SDKs to connect and authenticate to Azure. You can access your Communication Services connection strings and service endpoints from the Azure portal or programmatically with Azure Resource Manager APIs.
After navigating to your Communication Services resource, select **Keys** from t
You can also access key information using Azure CLI, like your resource group or the keys for a specific resource.
-Install [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to login. You will need to provide your credentials to connect with your azure account.
+Install [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to login. You will need to provide your credentials to connect with your Azure account.
```azurecli az login ```
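After signing in, you can also list the keys for a specific Communication Services resource from the CLI. This is a sketch that assumes the Azure CLI `communication` extension is installed; the resource and group names are placeholders.

```azurecli
az extension add --name communication
az communication list-key --name "<communication-resource-name>" --resource-group "<resource-group-name>"
```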
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
Title: Quickstart - Handle SMS events
+ Title: Quickstart - Handle SMS events for Delivery Reports and Inbound Messages
description: Learn how to handle SMS events using Azure Communication Services.
Last updated 03/10/2021
-# Quickstart: Handle SMS events
+# Quickstart: Handle SMS events for Delivery Reports and Inbound Messages
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
In this quickstart, you learned how to consume SMS events. You can receive SMS m
You may also want to: + - [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md) - [Learn about Event Grid](../../../event-grid/overview.md)
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
If you want to clean up and remove a Communication Services subscription, you ca
In this quickstart, you learned how to send SMS messages using Azure Communication Services. > [!div class="nextstepaction"]
-> [Subscribe to SMS Events](./handle-sms-events.md)
+> [Receive SMS and Delivery Report Events](./handle-sms-events.md)
> [!div class="nextstepaction"] > [Phone number types](../../concepts/telephony-sms/plan-solution.md)
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-resource-model.md
This resource contains a database account instance that can be restored. The dat
| restorableLocations: creationTime | The time in UTC when the regional account was created.| | restorableLocations: deletionTime | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
-To get a list of all restorable accounts, see [Restorable Database Accounts - list](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorabledatabaseaccounts/list) or [Restorable Database Accounts- list by location](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorabledatabaseaccounts/listbylocation) articles.
+To get a list of all restorable accounts, see [Restorable Database Accounts - list](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorabledatabaseaccounts/list) or [Restorable Database Accounts- list by location](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorabledatabaseaccounts/listbylocation) articles.
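For a quick look at the same information without calling the REST API directly, the restorable database accounts can also be listed with the Azure CLI. This is a sketch; depending on your CLI version, the command may require the `cosmosdb-preview` extension.

```azurecli
az cosmosdb restorable-database-account list --location "West US"
```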
### Restorable SQL database
Each resource contains information of a mutation event such as creation and dele
| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event is not initiated by the user</li></ul> | | database |The properties of the SQL database at the time of the event|
-To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorablesqldatabases/list) article.
+To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablesqldatabases/list) article.
### Restorable SQL container
Each resource contains information of a mutation event such as creation and dele
| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event is not initiated by the user</li></ul> | | container | The properties of the SQL container at the time of the event.|
-To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorablesqlcontainers/list) article.
+To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablesqlcontainers/list) article.
### Restorable SQL resources
Each resource represents a single database and all the containers under that dat
| databaseName | The name of the SQL database. | collectionNames | The list of SQL containers under this database.|
-To get a list of SQL database and container combo that exist on the account at the given timestamp and location, see [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorablesqlresources/list) article.
+To get a list of SQL database and container combo that exist on the account at the given timestamp and location, see [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablesqlresources/list) article.
### Restorable MongoDB database
Each resource contains information of a mutation event such as creation and dele
| ownerResourceId | The resource ID of the MongoDB database. | | operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
-To get a list of all database mutation, see [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorablemongodbdatabases/list) article.
+To get a list of all database mutation, see [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablemongodbdatabases/list) article.
### Restorable MongoDB collection
Each resource contains information of a mutation event such as creation and dele
| ownerResourceId | The resource ID of the MongoDB collection. | | operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user</li></ul> |
-To get a list of all container mutations under the same database, see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorablemongodbcollections/list) article.
+To get a list of all container mutations under the same database, see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablemongodbcollections/list) article.
### Restorable MongoDB resources
Each resource represents a single database and all the collections under that da
| databaseName |The name of the MongoDB database. | | collectionNames | The list of MongoDB collections under this database. |
-To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [Restorable Mongodb Resources - List](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/restorablemongodbresources/list) article.
+To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [Restorable Mongodb Resources - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablemongodbresources/list) article.
## Next steps
cosmos-db Create Real Time Weather Dashboard Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-real-time-weather-dashboard-powerbi.md
Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dat
|Property |Data type |Filter | ||||
- |_ts | Numeric | [_ts] > Duration.TotalSeconds(RangeStart - #datetime(1970, 1, 1, 0, 0, 0)) and [_ts] < Duration.TotalSeconds(RangeEnd - #datetime(1970, 1, 1, 0, 0, 0))) |
+ |_ts | Numeric | [_ts] > Duration.TotalSeconds(RangeStart - #datetime(1970, 1, 1, 0, 0, 0)) and [_ts] < Duration.TotalSeconds(RangeEnd - #datetime(1970, 1, 1, 0, 0, 0))) |
|Date (for example:- 2019-08-19) | String | [Document.date]> DateTime.ToText(RangeStart,"yyyy-MM-dd") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-MM-dd") | |Date (for example:- 2019-08-11 12:00:00) | String | [Document.date]> DateTime.ToText(RangeStart," yyyy-mm-dd HH:mm:ss") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-mm-dd HH:mm:ss") |
Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dat
1. **Create a new Azure Analysis Services cluster** - [Create an instance of Azure Analysis services](../analysis-services/analysis-services-create-server.md) in the same region as the Azure Cosmos account and the Databricks cluster.
-1. **Create a new Analysis Services Tabular Project in Visual Studio** - [Install the SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-2017&preserve-view=true) and create an Analysis Services Tabular project in Visual Studio.
+1. **Create a new Analysis Services Tabular Project in Visual Studio** - [Install the SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt) and create an Analysis Services Tabular project in Visual Studio.
:::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/create-analysis-services-project.png" alt-text="Create Azure Analysis Services project":::
Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dat
Additionally, change the data type of the temperature columns to Decimal to make sure that these values can be plotted in Power BI.
-1. **Create Azure Analysis partitions** - Create partitions in Azure Analysis Services to divide the dataset into logical partitions that can be refreshed independently and at different frequencies. In this example, you create two partitions that would divide the dataset into the most recent monthΓÇÖs data and everything else.
+1. **Create Azure Analysis partitions** - Create partitions in Azure Analysis Services to divide the dataset into logical partitions that can be refreshed independently and at different frequencies. In this example, you create two partitions that would divide the dataset into the most recent month's data and everything else.
:::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/create-analysis-services-partitions.png" alt-text="Create analysis services partitions":::
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/assign-access-acm-data.md
To view cost data for Azure EA subscriptions, a user must have at least read acc
| Billing account<sup>1</sup> | [https://ea.azure.com](https://ea.azure.com/) | Enterprise Admin | None | All subscriptions from the enterprise agreement | | Department | [https://ea.azure.com](https://ea.azure.com/) | Department Admin | **DA view charges** enabled | All subscriptions belonging to an enrollment account that is linked to the department | | Enrollment account<sup>2</sup> | [https://ea.azure.com](https://ea.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
-| Management group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Reader) | **AO view charges** enabled | All subscriptions below the management group |
-| Subscription | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Reader) | **AO view charges** enabled | All resources/resource groups in the subscription |
-| Resource group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Reader) | **AO view charges** enabled | All resources in the resource group |
+| Management group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All subscriptions below the management group |
+| Subscription | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources/resource groups in the subscription |
+| Resource group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources in the resource group |
<sup>1</sup> The billing account is also referred to as the Enterprise Agreement or Enrollment.
To view cost data for Azure EA subscriptions, a user must have at least read acc
To view cost data for other Azure subscriptions, a user must have at least read access to one or more of the following scopes: -- Azure account - Management group
+- Subscription
- Resource group Various scopes are available after partners onboard customers to a Microsoft Customer Agreement. CSP customers can then use Cost Management features when enabled by their CSP partner. For more information, see [Get started with Azure Cost Management for partners](get-started-partners.md).
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
This article explores common troubleshooting methods for mapping data flows in A
### Error code: DF-Excel-InvalidFile - **Message**: Invalid excel file is provided while only .xlsx and .xls are supported.
-### Error code: DF-AdobeIntegration-InvalidMapToFilter
-- **Message**: Custom resource can only have one Key/Id mapped to filter.-
-### Error code: DF-AdobeIntegration-InvalidPartitionConfiguration
-- **Message**: Only single partition is supported. Partition schema may be RoundRobin or Hash.-- **Recommendation**: In AdobeIntegration settings, confirm you only have single partitions. The partition schema may be RoundRobin or Hash.-
-### Error code: DF-AdobeIntegration-KeyColumnMissed
-- **Message**: Key must be specified for non-insertable operations.-- **Recommendation**: Specify your key columns in AdobeIntegration settings for non-insertable operations.-
-### Error code: DF-AdobeIntegration-InvalidPartitionType
-- **Message**: Partition type has to be roundRobin.-- **Recommendation**: Confirm the partition type is roundRobin in AdobeIntegration settings.-
-### Error code: DF-AdobeIntegration-InvalidPrivacyRegulation
-- **Message**: Only privacy regulation supported currently is gdpr.-- **Recommendation**: Confirm the privacy regulation in AdobeIntegration settings is **'GDPR'**. ## Miscellaneous troubleshooting tips - **Issue**: Unexpected exception occurred and execution failed.
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
Create or add diagnostic settings for your data factory.
![Name your settings and select a log-analytics workspace](media/data-factory-monitor-oms/monitor-oms-image2.png) > [!NOTE]
- > Because an Azure log table can't have more than 500 columns, we **highly recommended** you select _Resource-Specific mode_. For more information, see [AzureDiagnostics Logs reference](/azure-monitor/reference/tables/azurediagnostics#additionalfields-column).
+ > Because an Azure log table can't have more than 500 columns, we **highly recommended** you select _Resource-Specific mode_. For more information, see [AzureDiagnostics Logs reference](/azure/azure-monitor/reference/tables/azurediagnostics).
1. Select **Save**.
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-dot-net.md
ms.devlang: dotnet Previously updated : 03/16/2021 Last updated : 03/27/2021 # Quickstart: Create a data factory and pipeline using .NET SDK
Next, create a C# .NET console application in Visual Studio:
string blobDatasetName = "BlobDataset"; string pipelineName = "Adfv2QuickStartPipeline"; ```
+> [!NOTE]
+> For US Azure Gov accounts, you have to use a BaseUri of *https://management.usgovcloudapi.net* instead of *https://management.azure.com/* when you create the data factory management client, as shown in the sketch below.
+>
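A minimal sketch of that client creation for an Azure Government subscription is shown below; it assumes `cred` and `subscriptionId` are defined as in the following steps, and relies on the `BaseUri` property exposed by the generated management client.

```csharp
// Sketch only: point the management client at the Azure Government endpoint.
// `cred` (ServiceClientCredentials) and `subscriptionId` are set up as in this quickstart.
var client = new DataFactoryManagementClient(cred)
{
    SubscriptionId = subscriptionId,
    BaseUri = new Uri("https://management.usgovcloudapi.net")
};
```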
3. Add the following code to the **Main** method that creates an instance of **DataFactoryManagementClient** class. You use this object to create a data factory, a linked service, datasets, and a pipeline. You also use this object to monitor the pipeline run details.
Next, create a C# .NET console application in Visual Studio:
SubscriptionId = subscriptionId }; ``` + ## Create a data factory Add the following code to the **Main** method that creates a **data factory**.
data-lake-analytics Understand Spark Code Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/understand-spark-code-concepts.md
For more information, see:
- [org.apache.spark.sql.types](https://spark.apache.org/docs/latest/api/scala/https://docsupdatetracker.net/index.html#org.apache.spark.sql.types.package) - [Spark SQL and DataFrames Types](https://spark.apache.org/docs/latest/sql-ref-datatypes.html) - [Scala value types](https://www.scala-lang.org/api/current/scala/AnyVal.html)-- [pyspark.sql.types](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.types)
+- [pyspark.sql.types](https://spark.apache.org/docs/2.3.1/api/python/_modules/pyspark/sql/types.html#module-pyspark.sql.types)
### Treatment of NULL
Spark's cost-based query optimizer has its own capabilities to provide hints and
- [Upgrade your big data analytics solutions from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md) - [Transform data using Spark activity in Azure Data Factory](../data-factory/transform-data-using-spark.md) - [Transform data using Hadoop Hive activity in Azure Data Factory](../data-factory/transform-data-using-hadoop-hive.md)-- [What is Apache Spark in Azure HDInsight](../hdinsight/spark/apache-spark-overview.md)
+- [What is Apache Spark in Azure HDInsight](../hdinsight/spark/apache-spark-overview.md)
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/concepts-roles-permissions.md
Previously updated : 10/15/2020 Last updated : 03/24/2021 # Roles and requirements for Azure Data Share
This article describes roles and permissions required to share and receive data
## Roles and requirements
-With Azure Data Share service, you can share data without exchanging credentials between data provider and consumer. Azure Data Share service uses Managed Identities (previously known as MSIs) to authenticate to Azure data store.
+With Azure Data Share service, you can share data without exchanging credentials between data provider and consumer. For snapshot-based sharing, Azure Data Share service uses Managed Identities (previously known as MSIs) to authenticate to Azure data store. Azure Data Share resource's managed identity needs to be granted access to Azure data store to read or write data.
-Azure Data Share resource's managed identity needs to be granted access to Azure data store. Azure Data Share service then uses this managed identity to read and write data for snapshot-based sharing, and to establish symbolic link for in-place sharing.
-
-To share or receive data from an Azure data store, user needs at least the following permissions. Additional permissions are required for SQL-based sharing.
+To share or receive data from an Azure data store, user needs at least the following permissions.
* Permission to write to the Azure data store. Typically, this permission exists in the **Contributor** role.
-* Permission to create role assignment in the Azure data store. Typically, permission to create role assignments exists in the **Owner** role, User Access Administrator role, or a custom role with Microsoft.Authorization/role assignments/write permission assigned. This permission is not required if the data share resource's managed identity is already granted access to the Azure data store. See table below for required role.
-Below is a summary of the roles assigned to Data Share resource's managed identity:
+For storage and data lake snapshot-based sharing, you also need permission to create role assignments in the Azure data store. Typically, permission to create role assignments exists in the **Owner** role, the User Access Administrator role, or a custom role with the *Microsoft.Authorization/roleAssignments/write* permission assigned. This permission is not required if the data share resource's managed identity is already granted access to the Azure data store. Below is a summary of the roles assigned to the Data Share resource's managed identity:
|**Data Store Type**|**Data Provider Source Data Store**|**Data Consumer Target Data Store**| |||| |Azure Blob Storage| Storage Blob Data Reader | Storage Blob Data Contributor |Azure Data Lake Gen1 | Owner | Not Supported |Azure Data Lake Gen2 | Storage Blob Data Reader | Storage Blob Data Contributor
-|Azure Data Explorer Cluster | Contributor | Contributor
|
-For SQL-based sharing, a SQL user needs to be created from an external provider in Azure SQL Database with the same name as the Azure Data Share resource. Azure Active Directory admin permission is required to create this user. Below is a summary of the permission required by the SQL user.
+For SQL snapshot-based sharing, a SQL user needs to be created from an external provider in Azure SQL Database with the same name as the Azure Data Share resource. Azure Active Directory admin permission is required to create this user. Below is a summary of the permission required by the SQL user.
|**SQL Database Type**|**Data Provider SQL User Permission**|**Data Consumer SQL User Permission**| ||||
For SQL-based sharing, a SQL user needs to be created from an external provider
| ### Data provider
+For storage and data lake snapshot-based sharing, to add a dataset in Azure Data Share, the provider data share resource's managed identity needs to be granted access to the source Azure data store. For example, in the case of a storage account, the data share resource's managed identity is granted the *Storage Blob Data Reader* role. This is done automatically by the Azure Data Share service when a user adds a dataset via the Azure portal and has the proper permission, for example, when the user is an owner of the Azure data store or is a member of a custom role that has the *Microsoft.Authorization/roleAssignments/write* permission assigned.
-To add a dataset in Azure Data Share, provider data share resource's managed identity needs to be granted access to the source Azure data store. For example, in the case of storage account, the data share resource's managed identity is granted the Storage Blob Data Reader role.
-
-This is done automatically by the Azure Data Share service when user is adding dataset via Azure portal and the user has the proper permission. For example, user is an owner of the Azure data store, or is a member of a custom role that has the Microsoft.Authorization/role assignments/write permission assigned.
-
-Alternatively, user can have owner of the Azure data store add the data share resource's managed identity to the Azure data store manually. This action only needs to be performed once per data share resource.
-
-To create a role assignment for the data share resource's managed identity manually, follow the below steps.
+Alternatively, the user can have the owner of the Azure data store add the data share resource's managed identity to the Azure data store manually. This action only needs to be performed once per data share resource. To create a role assignment for the data share resource's managed identity manually, follow the steps below.
1. Navigate to the Azure data store. 1. Select **Access Control (IAM)**.
To create a role assignment for the data share resource's managed identity manua
To learn more about role assignment, refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). If you are sharing data using REST APIs, you can create role assignment using API by referencing [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md).
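If you prefer scripting over the portal steps above, a role assignment for the Data Share resource's managed identity can also be created with the Azure CLI. This is a sketch; the object ID, role, and scope values are placeholders that depend on your source data store.

```azurecli
az role assignment create \
  --assignee-object-id "<data-share-managed-identity-object-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```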
-For SQL-based sources, a SQL user needs to be created from an external provider in SQL Database with the same name as the Azure Data Share resource while connecting to SQL database using Azure Active Directory authentication. This user needs to be granted *db_datareader* permission. A sample script along with other prerequisites for SQL-based sharing can be found in the [Share from Azure SQL Database or Azure Synapse Analytics](how-to-share-from-sql.md) tutorial.
+For SQL snapshot-based sharing, a SQL user needs to be created from an external provider in SQL Database with the same name as the Azure Data Share resource while connecting to SQL database using Azure Active Directory authentication. This user needs to be granted *db_datareader* permission. A sample script along with other prerequisites for SQL-based sharing can be found in the [Share from Azure SQL Database or Azure Synapse Analytics](how-to-share-from-sql.md) tutorial.
### Data consumer
-To receive data, consumer data share resource's managed identity needs to be granted access to the target Azure data store. For example, in the case of storage account, the data share resource's managed identity is granted the Storage Blob Data Contributor role.
-
-This is done automatically by the Azure Data Share service if the user specifies a target data store via Azure portal and the user has proper permission. For example, user is an owner of the Azure data store, or is a member of a custom role which has the Microsoft.Authorization/role assignments/write permission assigned.
-
-Alternatively, user can have owner of the Azure data store add the data share resource's managed identity to the Azure data store manually. This action only needs to be performed once per data share resource.
+To receive data into a storage account, the consumer data share resource's managed identity needs to be granted access to the target storage account. The data share resource's managed identity needs to be granted the *Storage Blob Data Contributor* role. This is done automatically by the Azure Data Share service if the user specifies a target storage account via the Azure portal and has the proper permission, for example, when the user is an owner of the storage account or is a member of a custom role that has the *Microsoft.Authorization/roleAssignments/write* permission assigned.
-To create a role assignment for the data share resource's managed identity manually, follow the below steps.
+Alternatively, the user can have the owner of the storage account add the data share resource's managed identity to the storage account manually. This action only needs to be performed once per data share resource. To create a role assignment for the data share resource's managed identity manually, follow the steps below.
1. Navigate to the Azure data store. 1. Select **Access Control (IAM)**.
data-share How To Share From Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-sql.md
When data is received into SQL table and if the target table does not already ex
Below is the list of prerequisites for sharing data from SQL source. #### Prerequisites for sharing from Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
-You can follow the [step by step demo](https://youtu.be/hIE-TjJD8Dc) to configure prerequisites.
++
+To share data using Azure Active Directory authentication, here is a list of prerequisites:
+
+* An Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) with tables and views that you want to share.
+* Permission to write to the databases on SQL server, which is present in *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
+* SQL Server **Azure Active Directory Admin**
+* SQL Server Firewall access. This can be done through the following steps:
+ 1. In Azure portal, navigate to SQL server. Select *Firewalls and virtual networks* from left navigation.
+ 1. Click **Yes** for *Allow Azure services and resources to access this server*.
+ 1. Click **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you are sharing SQL data from Azure portal. You can also add an IP range.
+ 1. Click **Save**.
+
+To share data using SQL authentication, below is a list of prerequisites. You can follow the [step by step demo](https://youtu.be/hIE-TjJD8Dc) to configure prerequisites.
* An Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) with tables and views that you want to share. * Permission to write to the databases on SQL server, which is present in *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
Create an Azure Data Share resource in an Azure resource group.
![AddDatasets](./media/add-datasets.png "Add Datasets")
-1. Select your SQL server or Synapse workspace, provide credentials if prompted and select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
+1. Select your SQL server or Synapse workspace. If you are using AAD authentication and the checkbox **Allow Data Share to run the above 'create user' SQL script on my behalf** appears, check the checkbox. If you are using SQL authentication, provide credentials, and follow the steps in the prerequisites to run the script that appears on the screen. This gives the Data Share resource permission to read from your SQL DB.
+
+ Select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
![SelectDatasets](./media/select-datasets-sql.png "Select Datasets")
If you choose to receive data into Azure Storage, below is the list of prerequis
If you choose to receive data into Azure SQL Database, Azure Synapse Analytics, below is the list of prerequisites. #### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
-You can follow the [step by step demo](https://youtu.be/aeGISgK1xro) to configure prerequisites.
+
+To receive data into a SQL server where you are the **Azure Active Directory admin** of the SQL server, here is a list of prerequisites:
+
+* An Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW).
+* Permission to write to the databases on SQL server, which is present in *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
+* SQL Server Firewall access. This can be done through the following steps:
+ 1. In Azure portal, navigate to SQL server. Select *Firewalls and virtual networks* from left navigation.
+ 1. Click **Yes** for *Allow Azure services and resources to access this server*.
+ 1. Click **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you are sharing SQL data from Azure portal. You can also add an IP range.
+ 1. Click **Save**.
+
+To receive data into a SQL server where you are not the **Azure Active Directory admin**, below is a list of prerequisites. You can follow the [step by step demo](https://youtu.be/aeGISgK1xro) to configure prerequisites.
* An Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW). * Permission to write to databases on the SQL server, which is present in *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
Follow the steps below to configure where you want to receive data.
![Map to target](./media/dataset-map-target.png "Map to target")
-1. Select a target data store that you'd like the data to land in. Any data files or tables in the target data store with the same path and name will be overwritten.
+1. Select a target data store that you'd like the data to land in. Any data files or tables in the target data store with the same path and name will be overwritten. If you are receiving data into a SQL target and the **Allow Data Share to run the above 'create user' SQL script on my behalf** checkbox appears, check the checkbox. Otherwise, follow the instructions in the prerequisites to run the script that appears on the screen. This will give the Data Share resource write permission to your target SQL DB.
![Target storage account](./media/dataset-map-target-sql.png "Target Data Store")
-1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular update to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
+1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular update to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**. Note that the first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
![Enable snapshot schedule](./media/enable-snapshot-schedule.png "Enable snapshot schedule") ### Trigger a snapshot These steps only apply to snapshot-based sharing.
-1. You can trigger a snapshot by selecting **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full or incremental snapshot of your data. If it is your first time receiving data from your data provider, select full copy. For SQL sources, only full snapshot is supported. When a snapshot is executing, subsequent snapshots will not start until the previous one complete.
+1. You can trigger a snapshot by selecting the **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full or incremental snapshot of your data. If it is your first time receiving data from your data provider, select full copy. For SQL sources, only a full snapshot is supported. When a snapshot is executing, subsequent snapshots will not start until the previous one completes.
![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
Follow the steps in this section to configure a location to receive data.
![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
-1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**.
+1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**. Note that the first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
![Screenshot showing how to enable a snapshot schedule.](./media/enable-snapshot-schedule.png "Enable snapshot schedule.")
data-share Share Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data.md
Previously updated : 11/12/2020 Last updated : 03/24/2021 # Tutorial: Share data using Azure Data Share
In this tutorial, you'll learn how to:
Below is the list of prerequisites for sharing data from SQL source. #### Prerequisites for sharing from Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
-You can follow the [step by step demo](https://youtu.be/hIE-TjJD8Dc) to configure prerequisites.
* An Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) with tables and views that you want to share. * Permission to write to the databases on SQL server, which is present in *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
-* Permission for the Data Share resource's managed identity to access the database. This can be done through the following steps:
- 1. In Azure portal, navigate to the SQL server and set yourself as the **Azure Active Directory Admin**.
- 1. Connect to the Azure SQL Database/Data Warehouse using [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
- 1. Execute the following script to add the Data Share resource Managed Identity as a db_datareader. You must connect using Active Directory and not SQL Server authentication.
-
- ```sql
- create user "<share_acct_name>" from external provider;
- exec sp_addrolemember db_datareader, "<share_acct_name>";
- ```
- Note that the *<share_acc_name>* is the name of your Data Share resource. If you have not created a Data Share resource as yet, you can come back to this pre-requisite later.
-
-* An Azure SQL Database User with **'db_datareader'** access to navigate and select the tables and/or views you wish to share.
-
+* **Azure Active Directory Admin** of the SQL server
* SQL Server Firewall access. This can be done through the following steps: 1. In Azure portal, navigate to SQL server. Select *Firewalls and virtual networks* from left navigation. 1. Click **Yes** for *Allow Azure services and resources to access this server*.
You can follow the [step by step demo](https://youtu.be/hIE-TjJD8Dc) to configur
### Share from Azure Data Explorer * An Azure Data Explorer cluster with databases you want to share. * Permission to write to Azure Data Explorer cluster, which is present in *Microsoft.Kusto/clusters/write*. This permission exists in the **Contributor** role.
-* Permission to add role assignment to the Azure Data Explorer cluster, which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the **Owner** role.
## Sign in to the Azure portal
Use these commands to create the resource:
![Add Datasets to your share](./media/datasets.png "Datasets")
-1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step. If sharing from an Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you will be prompted for SQL credentials to list tables.
+1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step. If sharing from an Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you will be prompted for authentication method to list tables. Select AAD authentication, and check the checkbox **Allow Data Share to run the above 'create user' script on my behalf**.
![AddDatasets](./media/add-datasets.png "Add Datasets")
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/subscribe-to-data-share.md
Previously updated : 11/12/2020 Last updated : 03/24/2021 # Tutorial: Accept and receive data using Azure Data Share
Ensure that all pre-requisites are complete before accepting a data share invita
If you choose to receive data into Azure SQL Database, Azure Synapse Analytics, below is the list of prerequisites. #### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
-You can follow the [step by step demo](https://youtu.be/aeGISgK1xro) to configure prerequisites.
* An Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW). * Permission to write to databases on the SQL server, which is present in *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
-* Permission for the Data Share resource's managed identity to access the Azure SQL Database or Azure Synapse Analytics. This can be done through the following steps:
- 1. In Azure portal, navigate to the SQL server and set yourself as the **Azure Active Directory Admin**.
- 1. Connect to the Azure SQL Database/Data Warehouse using [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
- 1. Execute the following script to add the Data Share Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'. You must connect using Active Directory and not SQL Server authentication.
-
- ```sql
- create user "<share_acc_name>" from external provider;
- exec sp_addrolemember db_datareader, "<share_acc_name>";
- exec sp_addrolemember db_datawriter, "<share_acc_name>";
- exec sp_addrolemember db_ddladmin, "<share_acc_name>";
- ```
- Note that the *<share_acc_name>* is the name of your Data Share resource. If you have not created a Data Share resource as yet, you can come back to this pre-requisite later.
-
+* **Azure Active Directory Admin** of the SQL server
* SQL Server Firewall access. This can be done through the following steps: 1. In SQL server in Azure portal, navigate to *Firewalls and virtual networks* 1. Click **Yes** for *Allow Azure services and resources to access this server*.
You can follow the [step by step demo](https://youtu.be/aeGISgK1xro) to configur
* An Azure Data Explorer cluster in the same Azure data center as the data provider's Data Explorer cluster: If you don't already have one, you can create an [Azure Data Explorer cluster](/azure/data-explorer/create-cluster-database-portal). If you don't know the Azure data center of the data provider's cluster, you can create the cluster later in the process. * Permission to write to the Azure Data Explorer cluster, which is present in *Microsoft.Kusto/clusters/write*. This permission exists in the Contributor role.
-* Permission to add role assignment to the Azure Data Explorer cluster, which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the Owner role.
## Sign in to the Azure portal
Follow the steps below to configure where you want to receive data.
![Map to target](./media/dataset-map-target.png "Map to target")
-1. Select a target data store type that you'd like the data to land in. Any data files or tables in the target data store with the same path and name will be overwritten.
+1. Select a target data store type that you'd like the data to land in. Any data files or tables in the target data store with the same path and name will be overwritten. If you are receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), check the checkbox **Allow Data Share to run the above 'create user' script on my behalf**.
For in-place sharing, select a data store in the Location specified. The Location is the Azure data center where data provider's source data store is located at. Once dataset is mapped, you can follow the link in the Target Path to access the data. ![Target storage account](./media/dataset-map-target-sql.png "Target storage")
-1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular update to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
+1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular update to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**. Note that the first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
![Enable snapshot schedule](./media/enable-snapshot-schedule.png "Enable snapshot schedule")
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-cli-python.md
A Python script is provided to you to create a VM. Depending on whether you are
## Next steps
-[Common Az CLI commands for Linux virtual machines](../virtual-machines/linux/cli-manage.md)
+[Common Az CLI commands for Linux virtual machines](../virtual-machines/linux/cli-manage.md)
databox-online Azure Stack Edge Gpu Sharing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-sharing.md
# GPU sharing on your Azure Stack Edge Pro GPU device
-Graphics processing unit (GPU) is a specialized processor designed to accelerate graphics rendering. GPUs can process many pieces of data simultaneously, making them useful for machine learning, video editing, and gaming applications. In addition to CPU for general purpose compute, your Azure Stack Edge Pro GPU devices can contain one or two Nvidia Tesla T4 GPUs for compute-intensive workloads such as hardware accelerated inferencing. For more information, see [Nvidia's Tesla T4 GPU](https://www.nvidia.com/data-center/tesla-t4/).
+A graphics processing unit (GPU) is a specialized processor designed to accelerate graphics rendering. GPUs can process many pieces of data simultaneously, making them useful for machine learning, video editing, and gaming applications. In addition to the CPU for general purpose compute, your Azure Stack Edge Pro GPU devices can contain one or two Nvidia Tesla T4 GPUs for compute-intensive workloads such as hardware accelerated inferencing. For more information, see [Nvidia's Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/).
## About GPU sharing
databox-online Azure Stack Edge Gpu Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot.md
Here are the errors related to blob storage on Azure Stack Edge Pro/ Data Box Ga
## Next steps -- Learn more on how to [Troubleshoot device activation issues](azure-stack-edge-gpu-troubleshoot-activation.md).
+- Learn more on how to [Troubleshoot device activation issues](azure-stack-edge-gpu-troubleshoot-activation.md).
databox-online Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/security-baseline.md
Note that additional permissions might be required to get visibility into worklo
**Guidance**: Only authorized users, for example, the 'EdgeArmUser' can access the Azure Stack Edge device APIs via the local Azure Resource Manager. User account passwords can only be managed at the Azure portal. -- [Set Azure Resource Manager password](/azure/azure-stack-edge-gpu-set-azure-resource-manager-password)
+- [Set Azure Resource Manager password](/azure/databox-online/azure-stack-edge-gpu-set-azure-resource-manager-password)
**Azure Security Center monitoring**: Currently not available
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
This approach is more secure because the lab user authenticates directly to the
1. `{gateway-hostname}` is the gateway hostname specified on the **Lab Settings** page for your lab in the Azure portal.
1. `{lab-machine-name}` is the name of the machine that you're trying to connect to.
1. `{port-number}` is the port on which the connection needs to be made. Usually this port is 3389. If the lab VM is using the [shared IP](devtest-lab-shared-ip.md) feature in DevTest Labs, the port will be different.
-1. The remote desktop gateway defers the call from `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to an Azure function to generate the authentication token. The DevTest Labs service automatically includes the function key in the request header. The function key is to be saved in the labΓÇÖs key vault. The name for that secret to be shown as **Gateway token secret** on the **Lab Settings** page for the lab.
+1. The remote desktop gateway defers the call from `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to an Azure function to generate the authentication token. The DevTest Labs service automatically includes the function key in the request header. The function key is to be saved in the lab's key vault. The name for that secret to be shown as **Gateway token secret** on the **Lab Settings** page for the lab.
1. The Azure function is expected to return a token for certificate-based token authentication against the gateway machine.
1. The Get RDP file contents action then returns the complete RDP file, including the authentication information.
1. You open the RDP file using your preferred RDP connection program. Remember that not all RDP connection programs support token authentication. The authentication token does have an expiration date, set by the function app. Make the connection to the lab VM before the token expires.
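If you want to smoke-test the gateway's token endpoint by hand, a hedged PowerShell sketch follows; the host name and machine name are placeholders, and the `x-functions-key` header name is an assumption based on the usual Azure Functions key convention (the article only says the key is sent in a request header):

```powershell
# All values are placeholders; the header name is an assumption.
$gateway = "gateway.contoso.com"
$vmName  = "my-lab-vm"
$port    = 3389

Invoke-RestMethod -Method Get `
    -Uri "https://$gateway/api/host/$vmName/port/$port" `
    -Headers @{ "x-functions-key" = "<function-key>" }
```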
To work with the DevTest Labs token authentication feature, there are a few conf
### Requirements for remote desktop gateway machines - TLS/SSL certificate must be installed on the gateway machine to handle HTTPS traffic. The certificate must match the fully qualified domain name (FQDN) of the load balancer for the gateway farm or the FQDN of the machine itself if there's only one machine. Wild-card TLS/SSL certificates don't work. - A signing certificate installed on gateway machine(s). Create a signing certificate by using [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1) script.-- Install the [Pluggable Authentication](https://code.msdn.microsoft.com/windowsdesktop/Remote-Desktop-Gateway-517d6273) module that supports token authentication for the remote desktop gateway. One example of such a module is `RDGatewayFedAuth.msi` that comes with [System Center Virtual Machine Manager (VMM) images](/system-center/vmm/install-console?view=sc-vmm-1807). For more information about System Center, see [System Center documentation](/system-center/) and [pricing details](https://www.microsoft.com/cloud-platform/system-center-pricing).
+- Install the [Pluggable Authentication](https://code.msdn.microsoft.com/windowsdesktop/Remote-Desktop-Gateway-517d6273) module that supports token authentication for the remote desktop gateway. One example of such a module is `RDGatewayFedAuth.msi` that comes with [System Center Virtual Machine Manager (VMM) images](/system-center/vmm/install-console?view=sc-vmm-1807&preserve-view=true). For more information about System Center, see [System Center documentation](/system-center/) and [pricing details](https://www.microsoft.com/cloud-platform/system-center-pricing).
- The gateway server can handle requests made to `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}`. The gateway-hostname is the FQDN of the load balancer of the gateway farm or the FQDN of the machine itself if there's only one machine. The `{lab-machine-name}` is the name of the lab machine that you're trying to connect to, and the `{port-number}` is the port on which the connection will be made. By default, this port is 3389. However, if the virtual machine is using the [shared IP](devtest-lab-shared-ip.md) feature in DevTest Labs, the port will be different.
Azure function handles request with format of `https://{function-app-uri}/app/ho
## Configure the lab to use token authentication
This section shows how to configure a lab to use a remote desktop gateway machine that supports token authentication. This section doesn't cover how to set up a remote desktop gateway farm itself. For that information, see the [Sample to create a remote desktop gateway](#sample-to-create-a-remote-desktop-gateway) section at the end of this article.
-Before you update the lab settings, store the key needed to successfully execute the function to return an authentication token in the labΓÇÖs key vault. You can get the function key value in the **Manage** page for the function in the Azure portal. For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Save the name of the secret for later use.
+Before you update the lab settings, store the key needed to successfully execute the function to return an authentication token in the lab's key vault. You can get the function key value in the **Manage** page for the function in the Azure portal. For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Save the name of the secret for later use.
-To find the ID of the labΓÇÖs key vault, run the following Azure CLI command:
+To find the ID of the lab's key vault, run the following Azure CLI command:
```azurecli
az resource show --name {lab-name} --resource-type 'Microsoft.DevTestLab/labs' --resource-group {lab-resource-group-name} --query properties.vaultName
```
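With the vault name in hand, one way to store the function key as a secret is with Azure PowerShell; a minimal sketch, where the vault name and secret name are placeholders (use whatever secret name you plan to enter as the **Gateway token secret**):

```powershell
# Placeholder values - the vault name comes from the command above; the
# secret name is what you will reference on the Lab Settings page.
$secretValue = ConvertTo-SecureString "<function-key-value>" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "<lab-key-vault-name>" `
    -Name "GatewayTokenSecret" `
    -SecretValue $secretValue
```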
Configure the lab to use token authentication by following these steps:
1. In the **Remote desktop** section, enter the fully qualified domain name (FQDN) or IP address of the remote desktop services gateway machine or farm for the **Gateway hostname** field. This value must match the FQDN of the TLS/SSL certificate used on gateway machines. ![Remote desktop options in lab settings](./media/configure-lab-remote-desktop-gateway/remote-desktop-options-in-lab-settings.png)
-1. In the **Remote desktop** section, for **Gateway token** secret, enter the name of the secret created earlier. This value isn't the function key itself, but the name of the secret in the labΓÇÖs key vault that holds the function key.
+1. In the **Remote desktop** section, for **Gateway token** secret, enter the name of the secret created earlier. This value isn't the function key itself, but the name of the secret in the lab's key vault that holds the function key.
![Gateway token secret in lab settings](./media/configure-lab-remote-desktop-gateway/gateway-token-secret.png)
1. **Save** Changes.
> [!NOTE]
- > By clicking **Save**, you agree to [Remote Desktop GatewayΓÇÖs license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
+ > By clicking **Save**, you agree to [Remote Desktop Gateway's license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
If configuring the lab via automation is preferred, see [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) for a sample PowerShell script to set **gateway hostname** and **gateway token secret** settings. The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) also provides an Azure Resource Manager template that creates or updates a lab with the **gateway hostname** and **gateway token secret** settings.
Here is an example NSG that only allows traffic that first goes through the gate
## Sample to create a remote desktop gateway
> [!NOTE]
-> By using the sample templates, you agree to [Remote Desktop GatewayΓÇÖs license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
+> By using the sample templates, you agree to [Remote Desktop Gateway's license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) provides a few samples to help set up the resources needed to use token authentication and remote desktop gateway with DevTest Labs. These samples include Azure Resource Manager templates for gateway machines, lab settings, and the function app.
Follow these steps to set up a sample solution for the remote desktop gateway fa
```powershell
$cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate;
- $cer.Import(ΓÇÿpath-to-certificateΓÇÖ);
+ $cer.Import('path-to-certificate');
$hash = $cer.GetCertHashString()
```
- To get the Base64 encoding using PowerShell, use the following command.
+ To get the Base64 encoding using PowerShell, use the following command.
```powershell
- [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes(ΓÇÿpath-to-certificateΓÇÖ))
+ [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes('path-to-certificate'))
```
3. Download files from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway).
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/dms-tools-matrix.md
The following tables identify the services and tools that you can use to plan su
| Source | Target | Discover /<br/>Inventory | Target and SKU<br/>recommendation | TCO/ROI and<br/>Business case | | | | | | |
-| SQL Server | Azure SQL DB | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
- SQL Server | Azure SQL DB MI | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| SQL Server | Azure SQL DB | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+ SQL Server | Azure SQL DB MI | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| SQL Server | Azure SQL VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| SQL Server | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| RDS SQL | Azure SQL DB, MI, VM | | [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| Oracle | Azure SQL DB, MI, VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[MigVisor*](https://www.migvisor.com/) | |
-| Oracle | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
+| RDS SQL | Azure SQL DB, MI, VM | | [DMA](/sql/dma/dma-overview) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Oracle | Azure SQL DB, MI, VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[MigVisor*](https://www.migvisor.com/) | |
+| Oracle | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | |
| MongoDB | Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) | |
| Cassandra | Cosmos DB | | | |
-| MySQL | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| MySQL | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| MySQL | Azure DB for MySQL | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| RDS MySQL | Azure DB for MySQL | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| DB2 | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
-| Access | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
+| DB2 | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Access | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Sybase - SAP ASE | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Sybase - SAP IQ | Azure SQL DB, MI, VM | | | |
| | | | | |
The following tables identify the services and tools that you can use to plan su
| Source | Target | App Data Access<br/>Layer Assessment | Database<br/>Assessment | Performance<br/>Assessment | | | | | | |
-| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) | | |
-| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
-| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
-| Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
+| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
+| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
| MongoDB | Cosmos DB | | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) |
| Cassandra | Cosmos DB | | | |
-| MySQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | |
+| MySQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | |
| MySQL | Azure DB for MySQL | | | |
| RDS MySQL | Azure DB for MySQL | | | |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
-| DB2 | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
-| Access | Azure SQL DB, MI, VM | | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | |
+| DB2 | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Access | Azure SQL DB, MI, VM | | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Sybase - SAP ASE | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
| Sybase - SAP IQ | Azure SQL DB, MI, VM | | | |
| | | | | |
The following tables identify the services and tools that you can use to plan su
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) | | | | | | |
-| SQL Server | Azure SQL DB | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| SQL Server | Azure SQL DB MI | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL VM | [DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| SQL Server | Azure Synapse Analytics | | | |
-| RDS SQL | Azure SQL DB, MI, VM | [DMA](/sql/dma/dma-overview?view=sql-server-2017) | [DMA](/sql/dma/dma-overview?view=sql-server-2017)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| RDS SQL | Azure SQL DB, MI, VM | [DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ispirer*](https://www.ispirer.com/solutions) | [Ispirer*](https://www.ispirer.com/solutions) | [DMS](https://azure.microsoft.com/services/database-migration/) |
| MongoDB | Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Cassandra | Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
-| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| RDS MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/11/static/app-pgdump.html) | | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| RDS PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/11/static/app-pgdump.html) | | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017) |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant?view=sql-server-2017)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |
+| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/solutions) | [Ispirer*](https://www.ispirer.com/solutions) | |
| | | | | |
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
Last updated 02/20/2020
# Migrate SQL Server Integration Services packages to an Azure SQL Managed Instance
If you use SQL Server Integration Services (SSIS) and want to migrate your SSIS projects/packages from the source SSISDB hosted by SQL Server to the destination SSISDB hosted by an Azure SQL Managed Instance, you can use Azure Database Migration Service.
-If the version of SSIS you use is earlier than 2012 or you use non-SSISDB package store types, before migrating your SSIS projects/packages, you need to convert them by using the Integration Services Project Conversion Wizard, which can also be launched from SSMS. For more information, see the article [Converting projects to the project deployment model](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages?view=sql-server-2017#convert).
+If the version of SSIS you use is earlier than 2012 or you use non-SSISDB package store types, before migrating your SSIS projects/packages, you need to convert them by using the Integration Services Project Conversion Wizard, which can also be launched from SSMS. For more information, see the article [Converting projects to the project deployment model](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages#convert).
> [!NOTE]
> Azure Database Migration Service (DMS) currently does not support Azure SQL Database as a target migration destination. To redeploy SSIS projects/packages to Azure SQL Database, see the article [Redeploy SQL Server Integration Services packages to Azure SQL Database](./how-to-migrate-ssis-packages.md).
To complete these steps, you need:
* To create a Microsoft Azure Virtual Network for the Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information, see the article [Network topologies for SQL Managed Instance migrations using Azure Database Migration Service](https://aka.ms/dmsnetworkformi). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
* To ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* To configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access?view=sql-server-2017).
+* To configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
* To open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433.
* If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
* If you're using a firewall appliance in front of your source databases, you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration, as well as files via SMB port 445.
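For the default port mentioned above, a minimal PowerShell sketch of the Windows Firewall rule (run on the source server; the display name is arbitrary):

```powershell
# Opens the default SQL Server port (TCP 1433) inbound. Add a similar rule
# for UDP 1434 if you depend on the SQL Browser service for named instances.
New-NetFirewallRule -DisplayName "Allow SQL Server (TCP 1433)" `
    -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
```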
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/how-to-migrate-ssis-packages.md
Last updated 02/20/2020
If you use SQL Server Integration Services (SSIS) and want to migrate your SSIS projects/packages from the source SSISDB hosted by SQL Server to the destination SSISDB hosted by Azure SQL Database, you can redeploy them using the Integration Services Deployment Wizard. You can launch the wizard from within SQL Server Management Studio (SSMS).
-If the version of SSIS you use is earlier than 2012, before redeploying your SSIS projects/packages into the project deployment model, you first need to convert them by using the Integration Services Project Conversion Wizard, which can also be launched from SSMS. For more information, see the article [Converting projects to the project deployment model](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages?view=sql-server-2017#convert).
+If the version of SSIS you use is earlier than 2012, before redeploying your SSIS projects/packages into the project deployment model, you first need to convert them by using the Integration Services Project Conversion Wizard, which can also be launched from SSMS. For more information, see the article [Converting projects to the project deployment model](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages#convert).
> [!NOTE]
> The Azure Database Migration Service (DMS) currently does not support the migration of a source SSISDB to Azure SQL Database, but you can redeploy your SSIS projects/packages using the following process.
In this article, you learn how to:
To complete these steps, you need:
* SSMS version 17.2 or later.
-* An instance of your target database server to host SSISDB. If you donΓÇÖt already have one, create a [logical SQL server](../azure-sql/database/logical-servers.md) (without a database) using the Azure portal by navigating to the SQL Server (logical server only) [form](https://ms.portal.azure.com/#create/Microsoft.SQLServer).
+* An instance of your target database server to host SSISDB. If you don't already have one, create a [logical SQL server](../azure-sql/database/logical-servers.md) (without a database) using the Azure portal by navigating to the SQL Server (logical server only) [form](https://ms.portal.azure.com/#create/Microsoft.SQLServer).
* SSIS must be provisioned in Azure Data Factory (ADF) containing Azure-SSIS Integration Runtime (IR) with the destination SSISDB hosted by SQL Database (as described in the article [Provision the Azure-SSIS Integration Runtime in Azure Data Factory](../data-factory/tutorial-deploy-ssis-packages-azure.md)).
## Assess source SSIS projects/packages
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/howto-sql-server-to-azure-sql-powershell.md
Finally, create and start Azure Database Migration task. Azure Database Migratio
### Create credential parameters for source and target
-Connection security credentials can be created as a [PSCredential](/dotnet/api/system.management.automation.pscredential?view=powershellsdk-1.1.0) object.
+Connection security credentials can be created as a [PSCredential](/dotnet/api/system.management.automation.pscredential) object.
The following example shows the creation of *PSCredential* objects for both source and target connections providing passwords as string variables *$sourcePassword* and *$targetPassword*.
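The example itself is not included in this excerpt; a minimal sketch of what creating such *PSCredential* objects can look like (the login names are hypothetical placeholders):

```powershell
# $sourcePassword and $targetPassword are the plain-text password variables
# mentioned above; the login names are placeholders.
$secureSourcePassword = ConvertTo-SecureString $sourcePassword -AsPlainText -Force
$sourceCred = New-Object System.Management.Automation.PSCredential("sourceSqlLogin", $secureSourcePassword)

$secureTargetPassword = ConvertTo-SecureString $targetPassword -AsPlainText -Force
$targetCred = New-Object System.Management.Automation.PSCredential("targetSqlLogin", $secureTargetPassword)
```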
Use the `New-AzDataMigrationTask` cmdlet to create and start a migration task. T
* *TaskName*. Name of task to be created.
* *SourceConnection*. AzDmsConnInfo object representing source SQL Server connection.
* *TargetConnection*. AzDmsConnInfo object representing target Azure SQL Database connection.
-* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential?view=powershellsdk-1.1.0) object for connecting to source server.
-* *TargetCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential?view=powershellsdk-1.1.0) object for connecting to target server.
+* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to source server.
+* *TargetCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to target server.
* *SelectedDatabase*. AzDataMigrationSelectedDB object representing the source and target database mapping.
* *SchemaValidation*. (optional, switch parameter) Following the migration, performs a comparison of the schema information between source and target.
* *DataIntegrityValidation*. (optional, switch parameter) Following the migration, performs a checksum-based data integrity validation between source and target.
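To show how these parameters fit together, here is a hedged sketch of a task creation call; the resource group, service, project, and task names are placeholders, and the `MigrateSqlServerSqlDb` task type is an assumption for an offline SQL Server to Azure SQL Database migration:

```powershell
# A sketch only - all names are placeholders; see the cmdlet reference for
# the full parameter set.
$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDb `
    -ResourceGroupName "myResourceGroup" `
    -ServiceName "myDmsService" `
    -ProjectName "myDmsProject" `
    -TaskName "myMigrationTask" `
    -SourceConnection $sourceConnInfo `
    -SourceCred $sourceCred `
    -TargetConnection $targetConnInfo `
    -TargetCred $targetCred `
    -SelectedDatabase $selectedDbs `
    -SchemaValidation `
    -DataIntegrityValidation
```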
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
Potential issues associated with connecting to a source SQL Server database and
| Error | Cause and troubleshooting detail |
| - | - |
-| SQL connection failed. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct, and that SQL Server is configured to allow remote connections.<br> | This error occurs if the service canΓÇÖt locate the source server. To address the issue, see the article [Error connecting to source SQL Server when using dynamic port or named instance](./known-issues-troubleshooting-dms.md#error-connecting-to-source-sql-server-when-using-dynamic-port-or-named-instance). |
-| **Error 53** - SQL connection failed. (Also, for error codes 1, 2, 5, 53, 233, 258, 1225, 11001)<br><br> | This error occurs if the service canΓÇÖt connect to the source server. To address the issue, refer to the following resources, and then try again. <br><br> [Interactive user guide to troubleshoot the connectivity issue](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server)<br><br> [Prerequisites for migrating SQL Server to Azure SQL Database](./pre-reqs.md#prerequisites-for-migrating-sql-server-to-azure-sql-managed-instance) <br><br> [Prerequisites for migrating SQL Server to an Azure SQL Managed Instance](./pre-reqs.md#prerequisites-for-migrating-sql-server-to-azure-sql-managed-instance) |
-| **Error 18456** - Login failed.<br> | This error occurs if the service canΓÇÖt connect to the source database using the provided T-SQL credentials. To address the issue, verify the entered credentials. You can also refer to [MSSQLSERVER_18456](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error?view=sql-server-2017) or to the troubleshooting documents listed in the note below this table, and then try again. |
+| SQL connection failed. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct, and that SQL Server is configured to allow remote connections.<br> | This error occurs if the service can't locate the source server. To address the issue, see the article [Error connecting to source SQL Server when using dynamic port or named instance](./known-issues-troubleshooting-dms.md#error-connecting-to-source-sql-server-when-using-dynamic-port-or-named-instance). |
+| **Error 53** - SQL connection failed. (Also, for error codes 1, 2, 5, 53, 233, 258, 1225, 11001)<br><br> | This error occurs if the service can't connect to the source server. To address the issue, refer to the following resources, and then try again. <br><br> [Interactive user guide to troubleshoot the connectivity issue](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server)<br><br> [Prerequisites for migrating SQL Server to Azure SQL Database](./pre-reqs.md#prerequisites-for-migrating-sql-server-to-azure-sql-managed-instance) <br><br> [Prerequisites for migrating SQL Server to an Azure SQL Managed Instance](./pre-reqs.md#prerequisites-for-migrating-sql-server-to-azure-sql-managed-instance) |
+| **Error 18456** - Login failed.<br> | This error occurs if the service can't connect to the source database using the provided T-SQL credentials. To address the issue, verify the entered credentials. You can also refer to [MSSQLSERVER_18456](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error) or to the troubleshooting documents listed in the note below this table, and then try again. |
| Malformed AccountName value '{0}' provided. Expected format for AccountName is DomainName\UserName<br> | This error occurs if the user selects Windows authentication but provides the username in an invalid format. To address the issue, either provide the username in the correct format for Windows authentication or select **SQL Authentication**. |
## AWS RDS MySQL
Potential issues associated with connecting to a source AWS RDS MySQL database a
> [!NOTE] > For more information about troubleshooting issues related to connecting to a source AWS RDS MySQL database, see the following resources:
-> * [Troubleshooting for Amazon RDS Connectivity issues](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Connecting)
-> * [How do I resolve problems connecting to my Amazon RDS database instance?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect)
+> * [Troubleshooting for Amazon RDS Connectivity issues](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Connecting)
+> * [How do I resolve problems connecting to my Amazon RDS database instance?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect)
## AWS RDS PostgreSQL
Potential issues associated with connecting to a source AWS RDS PostgreSQL datab
| Error | Cause and troubleshooting detail |
| - | - |
-| **Error [101]**[08001] - connection failed. ERROR [08001] timeout expired. | This error occurs if the Postgres driver canΓÇÖt connect to the source server. To address the issue, refer to the troubleshooting documents listed in the note below this table, and then try again. |
+| **Error [101]**[08001] - connection failed. ERROR [08001] timeout expired. | This error occurs if the Postgres driver can't connect to the source server. To address the issue, refer to the troubleshooting documents listed in the note below this table, and then try again. |
| **Error: Parameter wal_level has value '{value}'. Please change it to 'logical' to allow replication.** | This error occurs if the parameter wal_level has the wrong value. To address the issue, change the rds.logical_replication parameter in the parameter group to 1, and then reboot the instance. For more information, see [Pre-requisites for migrating to Azure PostgreSQL using DMS](./tutorial-postgresql-azure-postgresql-online.md#prerequisites) or [PostgreSQL on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html). |
> [!NOTE]
> For more information about troubleshooting issues related to connecting to a source AWS RDS PostgreSQL database, see the following resources:
-> * [Troubleshooting for Amazon RDS Connectivity issues](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Connecting)
-> * [How do I resolve problems connecting to my Amazon RDS database instance?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect)
+> * [Troubleshooting for Amazon RDS Connectivity issues](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Connecting)
+> * [How do I resolve problems connecting to my Amazon RDS database instance?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect)
## AWS RDS SQL Server
Potential issues associated with connecting to a source AWS RDS SQL Server datab
| Error | Cause and troubleshooting detail |
| - | - |
-| **Error 53** - SQL connection failed. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server wasn't found or wasn't accessible. Verify that the instance name is correct, and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server | This error occurs if the service canΓÇÖt connect to the source server. To address the issue, refer to the troubleshooting documents listed in the note below this table, and then try again. |
-| **Error 18456** - Login failed. Login failed for user '{user}' | This error occurs if the service canΓÇÖt connect to the source database with the T-SQL credentials provided. To address the issue, verify the entered credentials. You can also refer to [MSSQLSERVER_18456](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error?view=sql-server-2017) or to the troubleshooting documents listed in the note below this table, and try again. |
-| **Error 87** - Connection string is not valid. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct, and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid) | This error occurs if the service canΓÇÖt connect to the source server because of an invalid connection string. To address the issue, verify the connection string provided. If the issue persists, refer to the troubleshooting documents listed in the note below this table, and then try again. |
+| **Error 53** - SQL connection failed. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server wasn't found or wasn't accessible. Verify that the instance name is correct, and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server | This error occurs if the service can't connect to the source server. To address the issue, refer to the troubleshooting documents listed in the note below this table, and then try again. |
+| **Error 18456** - Login failed. Login failed for user '{user}' | This error occurs if the service can't connect to the source database with the T-SQL credentials provided. To address the issue, verify the entered credentials. You can also refer to [MSSQLSERVER_18456](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error) or to the troubleshooting documents listed in the note below this table, and try again. |
+| **Error 87** - Connection string is not valid. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct, and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid) | This error occurs if the service can't connect to the source server because of an invalid connection string. To address the issue, verify the connection string provided. If the issue persists, refer to the troubleshooting documents listed in the note below this table, and then try again. |
| **Error - Server certificate not trusted.** A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.) | This error occurs if the certificate used isn't trusted. To address the issue, you need to find a certificate that can be trusted, and then enable it on the server. Alternatively, you can select the Trust Certificate option while connecting. Take this action only if you're familiar with the certificate used and you trust it. <br> TLS connections that are encrypted using a self-signed certificate don't provide strong security -- they're susceptible to man-in-the-middle attacks. Do not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet. <br> For more information, see [Using SSL with a Microsoft SQL Server DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.SSL.Using.html) or [Tutorial: Migrate RDS SQL Server to Azure using DMS](./index.yml). |
-| **Error 300** - User does not have required permissions. VIEW SERVER STATE permission was denied on object '{server}', database '{database}' | This error occurs if user doesn't have permission to perform the migration. To address the issue, refer to [GRANT Server Permissions - Transact-SQL](/sql/t-sql/statements/grant-server-permissions-transact-sql?view=sql-server-2017) or [Tutorial: Migrate RDS SQL Server to Azure using DMS](./index.yml) for more details. |
+| **Error 300** - User does not have required permissions. VIEW SERVER STATE permission was denied on object '{server}', database '{database}' | This error occurs if user doesn't have permission to perform the migration. To address the issue, refer to [GRANT Server Permissions - Transact-SQL](/sql/t-sql/statements/grant-server-permissions-transact-sql) or [Tutorial: Migrate RDS SQL Server to Azure using DMS](./index.yml) for more details. |
> [!NOTE] > For more information about troubleshooting issues related to connecting to a source AWS RDS SQL Server, see the following resources: >
-> * [Solving Connectivity errors to SQL Server](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server)
+> * [Solving Connectivity errors to SQL Server](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server)
> * [How do I resolve problems connecting to my Amazon RDS database instance?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect) ## Known issues
Potential issues associated with connecting to a source AWS RDS SQL Server datab
## Next steps
-* View the article [Azure Database Migration Service PowerShell](/powershell/module/azurerm.datamigration/?view=azurermps-6.13.0#data_migration).
+* View the article [Azure Database Migration Service PowerShell](/powershell/module/azurerm.datamigration/?view=azurermps-6.13.0&preserve-view=true#data_migration).
* View the article [How to configure server parameters in Azure Database for MySQL by using the Azure portal](../mysql/howto-server-parameters.md). * View the article [Overview of prerequisites for using Azure Database Migration Service](./pre-reqs.md). * See the [FAQ about using Azure Database Migration Service](./faq.md).
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-troubleshooting-dms.md
When you try to connect Azure Database Migration Service to SQL Server source th
## Next steps
-* View the article [Azure Database Migration Service PowerShell](/powershell/module/azurerm.datamigration/?view=azurermps-6.13.0#data_migration).
+* View the article [Azure Database Migration Service PowerShell](/powershell/module/azurerm.datamigration#data_migration).
* View the article [How to configure server parameters in Azure Database for MySQL by using the Azure portal](../mysql/howto-server-parameters.md). * View the article [Overview of prerequisites for using Azure Database Migration Service](./pre-reqs.md). * See the [FAQ about using Azure Database Migration Service](./faq.md).
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-migration-guide.md
This step will delete the legacy DNS zones and should be executed only after you
If you're using automation including templates, PowerShell scripts or custom code developed using SDK, you must update your automation to use the new resource model for the private DNS zones. Below are the links to new private DNS CLI/PS/SDK documentation. * [Azure DNS private zones REST API](/rest/api/dns/privatedns/privatezones)
-* [Azure DNS private zones CLI](/cli/azure/ext/privatedns/network/private-dns)
+* [Azure DNS private zones CLI](/cli/azure/network/private-dns/link/vnet?view=azure-cli-latest)
* [Azure DNS private zones PowerShell](/powershell/module/az.privatedns/) * [Azure DNS private zones SDK](/dotnet/api/overview/azure/privatedns/management?view=azure-dotnet-preview)
Create a support ticket if you need further help with the migration process or b
* Learn about DNS zones and records by visiting [DNS zones and records overview](dns-zones-records.md).
-* Learn about some of the other key [networking capabilities](../networking/networking-overview.md) of Azure.
+* Learn about some of the other key [networking capabilities](../networking/networking-overview.md) of Azure.
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/iot-fhir-portal-quickstart.md
On the **Device mapping** page, add the following script to the JSON editor and
"templateType": "IotJsonPathContent", "template": { "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@Body.HeartRate)]",
- "patientIdExpression": "$.SystemProperties.iothub-connection-device-id",
+ "typeMatchExpression": "$..[?(@Body.telemetry.HeartRate)]",
+ "patientIdExpression": "$.Properties.iotcentral-device-id",
"values": [ { "required": "true",
- "valueExpression": "$.Body.HeartRate",
+ "valueExpression": "$.Body.telemetry.HeartRate",
"valueName": "hr" } ]
iot-accelerators Howto Opc Publisher Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-configure.md
+
+ Title: Configure OPC Publisher - Azure | Microsoft Docs
+description: This article describes how to configure OPC Publisher to specify OPC UA node data changes, OPC UA events to publish and also the telemetry format.
+Last updated: 06/10/2019
+# Configure OPC Publisher
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+You can configure OPC Publisher to specify:
+
+- The OPC UA node data changes to publish.
+- The OPC UA events to publish.
+- The telemetry format.
+
+You can configure OPC Publisher using configuration files or using method calls.
+
+## Use configuration files
+
+This section describes the options for configuring OPC UA node publishing with configuration files.
+
+### Use a configuration file to configure publishing data changes
+
+The easiest way to configure the OPC UA nodes to publish is with a configuration file. The configuration file format is documented in [publishednodes.json](https://github.com/Azure/iot-edge-opc-publisher/blob/master/opcpublisher/publishednodes.json) in the repository.
+
+Configuration file syntax has changed over time. OPC Publisher still reads old formats, but converts them into the latest format when it persists the configuration.
+
+The following example shows the format of the configuration file:
+
+```json
+[
+ {
+ "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
+ "UseSecurity": true,
+ "OpcNodes": [
+ {
+ "Id": "i=2258",
+ "OpcSamplingInterval": 2000,
+ "OpcPublishingInterval": 5000,
+ "DisplayName": "Current time"
+ }
+ ]
+ }
+]
+```
+
+### Use a configuration file to configure publishing events
+
+To publish OPC UA events, you use the same configuration file as for data changes.
+
+The following example shows how to configure publishing for events generated by the [SimpleEvents server](https://github.com/OPCFoundation/UA-.NETStandard-Samples/tree/master/Workshop/SimpleEvents/Server), which can be found in the [OPC Foundation repository](https://github.com/OPCFoundation/UA-.NETStandard-Samples):
+
+```json
+[
+ {
+ "EndpointUrl": "opc.tcp://testserver:62563/Quickstarts/SimpleEventsServer",
+ "OpcEvents": [
+ {
+ "Id": "i=2253",
+ "DisplayName": "SimpleEventServerEvents",
+ "SelectClauses": [
+ {
+ "TypeId": "i=2041",
+ "BrowsePaths": [
+ "EventId"
+ ]
+ },
+ {
+ "TypeId": "i=2041",
+ "BrowsePaths": [
+ "Message"
+ ]
+ },
+ {
+ "TypeId": "nsu=http://opcfoundation.org/Quickstarts/SimpleEvents;i=235",
+ "BrowsePaths": [
+ "/2:CycleId"
+ ]
+ },
+ {
+ "TypeId": "nsu=http://opcfoundation.org/Quickstarts/SimpleEvents;i=235",
+ "BrowsePaths": [
+ "/2:CurrentStep"
+ ]
+ }
+ ],
+ "WhereClause": [
+ {
+ "Operator": "OfType",
+ "Operands": [
+ {
+ "Literal": "ns=2;i=235"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+]
+```
+
+## Use method calls
+
+This section describes the method calls you can use to configure OPC Publisher.
+
+### Configure using OPC UA method calls
+
+OPC Publisher includes an OPC UA Server, which can be accessed on port 62222. If the hostname is **publisher**, then the endpoint URI is: `opc.tcp://publisher:62222/UA/Publisher`.
+
+This endpoint exposes the following four methods:
+
+- PublishNode
+- UnpublishNode
+- GetPublishedNodes
+- IoTHubDirectMethod
+
+### Configure using IoT Hub direct method calls
+
+OPC Publisher implements the following IoT Hub direct method calls:
+
+- PublishNodes
+- UnpublishNodes
+- UnpublishAllNodes
+- GetConfiguredEndpoints
+- GetConfiguredNodesOnEndpoint
+- GetDiagnosticInfo
+- GetDiagnosticLog
+- GetDiagnosticStartupLog
+- ExitApplication
+- GetInfo
+
+The format of the JSON payloads of the method requests and responses is defined in [opcpublisher/HubMethodModel.cs](https://github.com/Azure/iot-edge-opc-publisher/tree/master/opcpublisher).
+
+If you call an unknown method on the module, it responds with a string that says the method isn't implemented. You can call an unknown method as a way to ping the module.
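+
+As a quick check, you can invoke one of these methods from the Azure CLI. The following is a minimal sketch, assuming OPC Publisher runs as an IoT Edge module with the module ID `publisher` (as described in [Run OPC Publisher](howto-opc-publisher-run.md)); the hub and device names are placeholders. For a standalone deployment, use `az iot hub invoke-device-method` instead:
+
+```sh/cmd
+az iot hub invoke-module-method --hub-name <your-iot-hub> --device-id <your-edge-device> --module-id publisher --method-name GetInfo --method-payload '{}'
+```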
+
+### Configure username and password for authentication
+
+The authentication mode can be set through an IoT Hub direct method call. The payload must contain the property **OpcAuthenticationMode** and the username and password:
+
+```json
+{
+ "EndpointUrl": "<Url of the endpoint to set authentication settings>",
+ "OpcAuthenticationMode": "UsernamePassword",
+ "Username": "<Username>",
+ "Password": "<Password>"
+ ...
+}
+```
+
+The password is encrypted by the IoT Hub Workload Client and stored in the publisher's configuration. To change authentication back to anonymous, use the method with the following payload:
+
+```json
+{
+ "EndpointUrl": "<Url of the endpoint to set authentication settings>",
+ "OpcAuthenticationMode": "Anonymous"
+ ...
+}
+```
+
+If the **OpcAuthenticationMode** property isn't set in the payload, the authentication settings remain unchanged in the configuration.
+
+## Configure telemetry publishing
+
+When OPC Publisher receives a notification of a value change in a published node, it generates a JSON formatted message that's sent to IoT Hub.
+
+You can configure the content of this JSON formatted message using a configuration file. If no configuration file is specified with the `--tc` option, a default configuration is used that's compatible with the [Connected factory solution accelerator](https://github.com/Azure/azure-iot-connected-factory).
+
+If OPC Publisher is configured to batch messages, then they're sent as a valid JSON array.
+
+The telemetry is derived from the following sources:
+
+- The OPC Publisher node configuration for the node
+- The **MonitoredItem** object of the OPC UA stack for which OPC Publisher got a notification.
+- The argument passed to this notification, which provides details on the data value change.
+
+The telemetry that's put into the JSON formatted message is a selection of important properties of these objects. If you need more properties, you need to change the OPC Publisher code base.
+
+The syntax of the configuration file is as follows:
+
+```json
+// The configuration settings file consists of two objects:
+// 1) The 'Defaults' object, which defines defaults for the telemetry configuration
+// 2) An array 'EndpointSpecific' of endpoint specific configuration
+// Both objects are optional and if they are not specified, then publisher uses
+// its internal default configuration, which generates telemetry messages compatible
+// with the Microsoft Connected factory Preconfigured Solution (https://github.com/Azure/azure-iot-connected-factory).
+
+// A JSON telemetry message for Connected factory looks like:
+// {
+// "NodeId": "i=2058",
+// "ApplicationUri": "urn:myopcserver",
+// "DisplayName": "CurrentTime",
+// "Value": {
+// "Value": "10.11.2017 14:03:17",
+// "SourceTimestamp": "2017-11-10T14:03:17Z"
+// }
+// }
+
+// The 'Defaults' object in the sample below is similar to what publisher
+// uses as its internal default telemetry configuration.
+{
+ "Defaults": {
+ // The first two properties ('EndpointUrl' and 'NodeId' are configuring data
+ // taken from the OpcPublisher node configuration.
+ "EndpointUrl": {
+
+ // The following three properties can be used to configure the 'EndpointUrl'
+ // property in the JSON message send by publisher to IoT Hub.
+
+ // Publish controls if the property should be part of the JSON message at all.
+ "Publish": false,
+
+ // Pattern is a regular expression, which is applied to the actual value of the
+ // property (here 'EndpointUrl').
+ // If this key is omitted (which is the default), then no regex matching is done
+ // at all, which improves performance.
+ // If the key is used you need to define groups in the regular expression.
+ // Publisher applies the regular expression and then concatenates all groups
+ // found and use the resulting string as the value in the JSON message to
+ //sent to IoT Hub.
+ // This example mimics the default behaviour and defines a group,
+ // which matches the conplete value:
+ "Pattern": "(.*)",
+ // Here some more exaples for 'Pattern' values and the generated result:
+ // "Pattern": "i=(.*)"
+ // defined for Defaults.NodeId.Pattern, will generate for the above sample
+ // a 'NodeId' value of '2058'to be sent by publisher
+ // "Pattern": "(i)=(.*)"
+ // defined for Defaults.NodeId.Pattern, will generate for the above sample
+ // a 'NodeId' value of 'i2058' to be sent by publisher
+
+ // Name allows you to use a shorter string as property name in the JSON message
+ // sent by publisher. By default the property name is unchanged and will be
+ // here 'EndpointUrl'.
+ // The 'Name' property can only be set in the 'Defaults' object to ensure
+ // all messages from publisher sent to IoT Hub have a similar layout.
+ "Name": "EndpointUrl"
+
+ },
+ "NodeId": {
+ "Publish": true,
+
+ // If you set Defaults.NodeId.Name to "ni", then the "NodeId" key/value pair
+ // (from the above example) will change to:
+ // "ni": "i=2058",
+ "Name": "NodeId"
+ },
+
+ // The MonitoredItem object is configuring the data taken from the MonitoredItem
+ // OPC UA object for published nodes.
+ "MonitoredItem": {
+
+ // If you set the Defaults.MonitoredItem.Flat to 'false', then a
+ // 'MonitoredItem' object will appear, which contains 'ApplicationUri'
+ // and 'DisplayNode' proerties:
+ // "NodeId": "i=2058",
+ // "MonitoredItem": {
+ // "ApplicationUri": "urn:myopcserver",
+ // "DisplayName": "CurrentTime",
+ // }
+ // The 'Flat' property can only be used in the 'MonitoredItem' and
+ // 'Value' objects of the 'Defaults' object and will be used
+ // for all JSON messages sent by publisher.
+ "Flat": true,
+
+ "ApplicationUri": {
+ "Publish": true,
+ "Name": "ApplicationUri"
+ },
+ "DisplayName": {
+ "Publish": true,
+ "Name": "DisplayName"
+ }
+ },
+ // The Value object is configuring the properties taken from the event object
+ // the OPC UA stack provided in the value change notification event.
+ "Value": {
+ // If you set the Defaults.Value.Flat to 'true', then the 'Value'
+ // object will disappear completely and the 'Value' and 'SourceTimestamp'
+ // members won't be nested:
+ // "DisplayName": "CurrentTime",
+ // "Value": "10.11.2017 14:03:17",
+ // "SourceTimestamp": "2017-11-10T14:03:17Z"
+ // The 'Flat' property can only be used for the 'MonitoredItem' and 'Value'
+ // objects of the 'Defaults' object and will be used for all
+ // messages sent by publisher.
+ "Flat": false,
+
+ "Value": {
+ "Publish": true,
+ "Name": "Value"
+ },
+ "SourceTimestamp": {
+ "Publish": true,
+ "Name": "SourceTimestamp"
+ },
+ // 'StatusCode' is the 32 bit OPC UA status code
+ "StatusCode": {
+ "Publish": false,
+ "Name": "StatusCode"
+ // 'Pattern' is ignored for the 'StatusCode' value
+ },
+ // 'Status' is the symbolic name of 'StatusCode'
+ "Status": {
+ "Publish": false,
+ "Name": "Status"
+ }
+ }
+ },
+
+ // The next object allows to configure 'Publish' and 'Pattern' for specific
+ // endpoint URLs. Those will overwrite the ones specified in the 'Defaults' object
+ // or the defaults used by publisher.
+ // It is not allowed to specify 'Name' and 'Flat' properties in this object.
+ "EndpointSpecific": [
+ // The following shows how a endpoint specific configuration can look like:
+ {
+ // 'ForEndpointUrl' allows to configure for which OPC UA server this
+ // object applies and is a required property for all objects in the
+ // 'EndpointSpecific' array.
+ // The value of 'ForEndpointUrl' must be an 'EndpointUrl' configured in
+ // the publishednodes.json confguration file.
+ "ForEndpointUrl": "opc.tcp://<your_opcua_server>:<your_opcua_server_port>/<your_opcua_server_path>",
+ "EndpointUrl": {
+ // We overwrite the default behaviour and publish the
+ // endpoint URL in this case.
+ "Publish": true,
+ // We are only interested in the URL part following the 'opc.tcp://' prefix
+ // and define a group matching this.
+ "Pattern": "opc.tcp://(.*)"
+ },
+ "NodeId": {
+ // We are not interested in the configured 'NodeId' value,
+ // so we do not publish it.
+ "Publish": false
+ // No 'Pattern' key is specified here, so the 'NodeId' value will be
+ // taken as specified in the publishednodes configuration file.
+ },
+ "MonitoredItem": {
+ "ApplicationUri": {
+ // We already publish the endpoint URL, so we do not want
+ // the ApplicationUri of the MonitoredItem to be published.
+ "Publish": false
+ },
+ "DisplayName": {
+ "Publish": true
+ }
+ },
+ "Value": {
+ "Value": {
+ // The value of the node is important for us, everything else we
+ // are not interested in to keep the data ingest as small as possible.
+ "Publish": true
+ },
+ "SourceTimestamp": {
+ "Publish": false
+ },
+ "StatusCode": {
+ "Publish": false
+ },
+ "Status": {
+ "Publish": false
+ }
+ }
+ }
+ ]
+}
+```
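+
+As a minimal sketch of using such a file, assuming you saved it as `telemetryconfig.json` next to your node configuration (both file names are assumptions) and run OPC Publisher natively as described in [Run OPC Publisher](howto-opc-publisher-run.md), pass it with the `--tc` option:
+
+```cmd
+dotnet opcpublisher.dll <applicationname> [<IoTHubconnectionstring>] --pf ./publishednodes.json --tc ./telemetryconfig.json
+```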
+
+## Next steps
+
+Now that you've learned how to configure OPC Publisher, the suggested next step is to learn how to [Run OPC Publisher](howto-opc-publisher-run.md).
iot-accelerators Howto Opc Publisher Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-run.md
+
+ Title: Run OPC Publisher - Azure | Microsoft Docs
+description: This article describes how to run and debug OPC Publisher. It also addresses performance and memory considerations.
+Last updated: 06/10/2019
+# Run OPC Publisher
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article describes how to run and debug OPC Publisher. It also addresses performance and memory considerations.
+
+## Command-line options
+
+Application usage is shown using the `--help` command-line option as follows:
+
+```sh/cmd
+Current directory is: /appdata
+Log file is: <hostname>-publisher.log
+Log level is: info
+
+OPC Publisher V2.3.0
+Informational version: V2.3.0+Branch.develop_hans_methodlog.Sha.0985e54f01a0b0d7f143b1248936022ea5d749f9
+
+Usage: opcpublisher.exe <applicationname> [<IoTHubconnectionstring>] [<options>]
+
+OPC Edge Publisher to subscribe to configured OPC UA servers and send telemetry to Azure IoT Hub.
+To exit the application, just press CTRL-C while it is running.
+
+applicationname: the OPC UA application name to use, required
+ The application name is also used to register the publisher under this name in the
+ IoT Hub device registry.
+
+IoTHubconnectionstring: the IoT Hub owner connectionstring, optional
+
+There are a couple of environment variables which can be used to control the application:
+_HUB_CS: sets the IoT Hub owner connectionstring
+_GW_LOGP: sets the filename of the log file to use
+_TPC_SP: sets the path to store certificates of trusted stations
+_GW_PNFP: sets the filename of the publishing configuration file
+
+Command line arguments overrule environment variable settings.
+
+Options:
+ --pf, --publishfile=VALUE
+ the filename to configure the nodes to publish.
+ Default: '/appdata/publishednodes.json'
+ --tc, --telemetryconfigfile=VALUE
+ the filename to configure the ingested telemetry
+ Default: ''
+ -s, --site=VALUE the site OPC Publisher is working in. if specified
+ this domain is appended (delimited by a ':' to
+ the 'ApplicationURI' property when telemetry is
+ sent to IoT Hub.
+ The value must follow the syntactical rules of a
+ DNS hostname.
+ Default: not set
+ --ic, --iotcentral publisher will send OPC UA data in IoTCentral
+ compatible format (DisplayName of a node is used
+ as key, this key is the Field name in IoTCentral)
+ . you need to ensure that all DisplayName's are
+ unique. (Auto enables fetch display name)
+ Default: False
+ --sw, --sessionconnectwait=VALUE
+ specify the wait time in seconds publisher is
+ trying to connect to disconnected endpoints and
+ starts monitoring unmonitored items
+ Min: 10
+ Default: 10
+ --mq, --monitoreditemqueuecapacity=VALUE
+ specify how many notifications of monitored items
+ can be stored in the internal queue, if the data
+ can not be sent quick enough to IoT Hub
+ Min: 1024
+ Default: 8192
+ --di, --diagnosticsinterval=VALUE
+ shows publisher diagnostic info at the specified
+ interval in seconds (need log level info).
+ -1 disables remote diagnostic log and diagnostic
+ output
+ 0 disables diagnostic output
+ Default: 0
+ --ns, --noshutdown=VALUE
+ same as runforever.
+ Default: False
+ --rf, --runforever publisher can not be stopped by pressing a key on
+ the console, but will run forever.
+ Default: False
+ --lf, --logfile=VALUE the filename of the logfile to use.
+ Default: './<hostname>-publisher.log'
+ --lt, --logflushtimespan=VALUE
+ the timespan in seconds when the logfile should be
+ flushed.
+ Default: 00:00:30 sec
+ --ll, --loglevel=VALUE the loglevel to use (allowed: fatal, error, warn,
+ info, debug, verbose).
+ Default: info
+ --ih, --IoT Hubprotocol=VALUE
+ the protocol to use for communication with IoT Hub (
+ allowed values: Amqp, Http1, Amqp_WebSocket_Only,
+ Amqp_Tcp_Only, Mqtt, Mqtt_WebSocket_Only, Mqtt_
+ Tcp_Only) or IoT EdgeHub (allowed values: Mqtt_
+ Tcp_Only, Amqp_Tcp_Only).
+ Default for IoT Hub: Mqtt_WebSocket_Only
+ Default for IoT EdgeHub: Amqp_Tcp_Only
+ --ms, --IoT Hubmessagesize=VALUE
+ the max size of a message which can be send to
+ IoT Hub. when telemetry of this size is available
+ it will be sent.
+ 0 will enforce immediate send when telemetry is
+ available
+ Min: 0
+ Max: 262144
+ Default: 262144
+ --si, --IoT Hubsendinterval=VALUE
+ the interval in seconds when telemetry should be
+ send to IoT Hub. If 0, then only the
+ IoT Hubmessagesize parameter controls when
+ telemetry is sent.
+ Default: '10'
+ --dc, --deviceconnectionstring=VALUE
+ if publisher is not able to register itself with
+ IoT Hub, you can create a device with name <
+ applicationname> manually and pass in the
+ connectionstring of this device.
+ Default: none
+ -c, --connectionstring=VALUE
+ the IoT Hub owner connectionstring.
+ Default: none
+ --hb, --heartbeatinterval=VALUE
+ the publisher is using this as default value in
+ seconds for the heartbeat interval setting of
+ nodes without
+ a heartbeat interval setting.
+ Default: 0
+ --sf, --skipfirstevent=VALUE
+ the publisher is using this as default value for
+ the skip first event setting of nodes without
+ a skip first event setting.
+ Default: False
+ --pn, --portnum=VALUE the server port of the publisher OPC server
+ endpoint.
+ Default: 62222
+ --pa, --path=VALUE the enpoint URL path part of the publisher OPC
+ server endpoint.
+ Default: '/UA/Publisher'
+ --lr, --ldsreginterval=VALUE
+ the LDS(-ME) registration interval in ms. If 0,
+ then the registration is disabled.
+ Default: 0
+ --ol, --opcmaxstringlen=VALUE
+ the max length of a string opc can transmit/
+ receive.
+ Default: 131072
+ --ot, --operationtimeout=VALUE
+ the operation timeout of the publisher OPC UA
+ client in ms.
+ Default: 120000
+ --oi, --opcsamplinginterval=VALUE
+ the publisher is using this as default value in
+ milliseconds to request the servers to sample
+ the nodes with this interval
+ this value might be revised by the OPC UA
+ servers to a supported sampling interval.
+ please check the OPC UA specification for
+ details how this is handled by the OPC UA stack.
+ a negative value will set the sampling interval
+ to the publishing interval of the subscription
+ this node is on.
+ 0 will configure the OPC UA server to sample in
+ the highest possible resolution and should be
+ taken with care.
+ Default: 1000
+ --op, --opcpublishinginterval=VALUE
+ the publisher is using this as default value in
+ milliseconds for the publishing interval setting
+ of the subscriptions established to the OPC UA
+ servers.
+ please check the OPC UA specification for
+ details how this is handled by the OPC UA stack.
+ a value less than or equal zero will let the
+ server revise the publishing interval.
+ Default: 0
+ --ct, --createsessiontimeout=VALUE
+ specify the timeout in seconds used when creating
+ a session to an endpoint. On unsuccessful
+ connection attemps a backoff up to 5 times the
+ specified timeout value is used.
+ Min: 1
+ Default: 10
+ --ki, --keepaliveinterval=VALUE
+ specify the interval in seconds the publisher is
+ sending keep alive messages to the OPC servers
+ on the endpoints it is connected to.
+ Min: 2
+ Default: 2
+ --kt, --keepalivethreshold=VALUE
+ specify the number of keep alive packets a server
+ can miss, before the session is disconneced
+ Min: 1
+ Default: 5
+ --aa, --autoaccept the publisher trusts all servers it is
+ establishing a connection to.
+ Default: False
+ --tm, --trustmyself=VALUE
+ same as trustowncert.
+ Default: False
+ --to, --trustowncert the publisher certificate is put into the trusted
+ certificate store automatically.
+ Default: False
+ --fd, --fetchdisplayname=VALUE
+ same as fetchname.
+ Default: False
+ --fn, --fetchname enable to read the display name of a published
+ node from the server. this will increase the
+ runtime.
+ Default: False
+ --ss, --suppressedopcstatuscodes=VALUE
+ specifies the OPC UA status codes for which no
+ events should be generated.
+ Default: BadNoCommunication,
+ BadWaitingForInitialData
+ --at, --appcertstoretype=VALUE
+ the own application cert store type.
+ (allowed values: Directory, X509Store)
+ Default: 'Directory'
+ --ap, --appcertstorepath=VALUE
+ the path where the own application cert should be
+ stored
+ Default (depends on store type):
+ X509Store: 'CurrentUser\UA_MachineDefault'
+ Directory: 'pki/own'
+ --tp, --trustedcertstorepath=VALUE
+ the path of the trusted cert store
+ Default: 'pki/trusted'
+ --rp, --rejectedcertstorepath=VALUE
+ the path of the rejected cert store
+ Default 'pki/rejected'
+ --ip, --issuercertstorepath=VALUE
+ the path of the trusted issuer cert store
+ Default 'pki/issuer'
+ --csr show data to create a certificate signing request
+ Default 'False'
+ --ab, --applicationcertbase64=VALUE
+ update/set this applications certificate with the
+ certificate passed in as bas64 string
+ --af, --applicationcertfile=VALUE
+ update/set this applications certificate with the
+ certificate file specified
+ --pb, --privatekeybase64=VALUE
+ initial provisioning of the application
+ certificate (with a PEM or PFX fomat) requires a
+ private key passed in as base64 string
+ --pk, --privatekeyfile=VALUE
+ initial provisioning of the application
+ certificate (with a PEM or PFX fomat) requires a
+ private key passed in as file
+ --cp, --certpassword=VALUE
+ the optional password for the PEM or PFX or the
+ installed application certificate
+ --tb, --addtrustedcertbase64=VALUE
+ adds the certificate to the applications trusted
+ cert store passed in as base64 string (multiple
+ strings supported)
+ --tf, --addtrustedcertfile=VALUE
+ adds the certificate file(s) to the applications
+ trusted cert store passed in as base64 string (
+ multiple filenames supported)
+ --ib, --addissuercertbase64=VALUE
+ adds the specified issuer certificate to the
+ applications trusted issuer cert store passed in
+ as base64 string (multiple strings supported)
+ --if, --addissuercertfile=VALUE
+ adds the specified issuer certificate file(s) to
+ the applications trusted issuer cert store (
+ multiple filenames supported)
+ --rb, --updatecrlbase64=VALUE
+ update the CRL passed in as base64 string to the
+ corresponding cert store (trusted or trusted
+ issuer)
+ --uc, --updatecrlfile=VALUE
+ update the CRL passed in as file to the
+ corresponding cert store (trusted or trusted
+ issuer)
+ --rc, --removecert=VALUE
+ remove cert(s) with the given thumbprint(s) (
+ multiple thumbprints supported)
+ --dt, --devicecertstoretype=VALUE
+ the IoT Hub device cert store type.
+ (allowed values: Directory, X509Store)
+ Default: X509Store
+ --dp, --devicecertstorepath=VALUE
+ the path of the iot device cert store
+ Default Default (depends on store type):
+ X509Store: 'My'
+ Directory: 'CertificateStores/IoT Hub'
+ -i, --install register OPC Publisher with IoT Hub and then exits.
+ Default: False
+ -h, --help show this message and exit
+ --st, --opcstacktracemask=VALUE
+ ignored, only supported for backward comaptibility.
+ --sd, --shopfloordomain=VALUE
+ same as site option, only there for backward
+ compatibility
+ The value must follow the syntactical rules of a
+ DNS hostname.
+ Default: not set
+ --vc, --verboseconsole=VALUE
+ ignored, only supported for backward comaptibility.
+ --as, --autotrustservercerts=VALUE
+ same as autoaccept, only supported for backward
+ cmpatibility.
+ Default: False
+ --tt, --trustedcertstoretype=VALUE
+ ignored, only supported for backward compatibility.
+ the trusted cert store will always reside in a
+ directory.
+ --rt, --rejectedcertstoretype=VALUE
+ ignored, only supported for backward compatibility.
+ the rejected cert store will always reside in a
+ directory.
+ --it, --issuercertstoretype=VALUE
+ ignored, only supported for backward compatibility.
+ the trusted issuer cert store will always
+ reside in a directory.
+```
+
+Typically you specify the IoT Hub owner connection string only on the first run of the application. The connection string is encrypted and stored in the platform certificate store. On later runs, the application reads the connection string from the certificate store. If you specify the connection string on each run, the device that's created for the application in the IoT Hub device registry is removed and recreated.
+
+## Run natively on Windows
+
+Open the **opcpublisher.sln** project with Visual Studio, build the solution, and publish it. You can start the application in the **Target directory** you published to as follows:
+
+```cmd
+dotnet opcpublisher.dll <applicationname> [<IoTHubconnectionstring>] [options]
+```
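+
+For example, a sketch of a first run that registers the application and trusts the server certificates it connects to (the application name and connection string are placeholders):
+
+```cmd
+dotnet opcpublisher.dll myfactorypublisher "HostName=<your-iot-hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>" --aa
+```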
+
+## Use a self-built container
+
+Build your own container and start it as follows:
+
+```sh/cmd
+docker run <your-container-name> <applicationname> [<IoTHubconnectionstring>] [options]
+```
+
+## Use a container from Microsoft Container Registry
+
+There's a prebuilt container available in the Microsoft Container Registry. Start it as follows:
+
+```sh/cmd
+docker run mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] [options]
+```
+
+Check [Docker Hub](https://hub.docker.com/_/microsoft-iotedge-opc-publisher) to see the supported operating systems and processor architectures. If your OS and CPU architecture is supported, Docker automatically selects the correct container.
+
+## Run as an Azure IoT Edge module
+
+OPC Publisher is ready to be used as an [Azure IoT Edge](../iot-edge/index.yml) module. When you use OPC Publisher as an IoT Edge module, the only supported transport protocols are **Amqp_Tcp_Only** and **Mqtt_Tcp_Only**.
+
+To add OPC Publisher as module to your IoT Edge deployment, go to your IoT Hub settings in the Azure portal and complete the following steps:
+
+1. Go to **IoT Edge** and create or select your IoT Edge device.
+1. Select **Set Modules**.
+1. Select **Add** under **Deployment Modules** and then **IoT Edge Module**.
+1. In the **Name** field, enter **publisher**.
+1. In the **Image URI** field, enter `mcr.microsoft.com/iotedge/opc-publisher:<tag>`. You can find the available tags on [Docker Hub](https://hub.docker.com/_/microsoft-iotedge-opc-publisher).
+1. Paste the following JSON into the **Container Create Options** field:
+
+ ```json
+ {
+ "Hostname": "publisher",
+ "Cmd": [
+ "--aa"
+ ]
+ }
+ ```
+
+ This configuration configures IoT Edge to start a container called **publisher** using the OPC Publisher image. The hostname of the container's system is set to **publisher**. OPC Publisher is called with the following command-line argument: `--aa`. With this option, OPC Publisher trusts the certificates of the OPC UA servers it connects to. You can use any OPC Publisher command-line options. The only limitation is the size of the **Container Create Options** supported by IoT Edge.
+
+1. Leave the other settings unchanged and select **Save**.
+1. If you want to process the output of the OPC Publisher locally with another IoT Edge module, go back to the **Set Modules** page. Then go to the **Specify Routes** tab, and add a new route that looks like the following JSON:
+
+ ```json
+ {
+ "routes": {
+ "processingModuleToIoT Hub": "FROM /messages/modules/processingModule/outputs/* INTO $upstream",
+ "opcPublisherToProcessingModule": "FROM /messages/modules/publisher INTO BrokeredEndpoint(\"/modules/processingModule/inputs/input1\")"
+ }
+ }
+ ```
+
+1. Back in the **Set Modules** page, select **Next**, until you reach the last page of the configuration.
+1. Select **Submit** to send your configuration to IoT Edge.
+1. When you've started IoT Edge on your edge device and the docker container **publisher** is running, you can check out the log output of OPC Publisher either by
+   using `docker logs -f publisher` or by checking the logfile. With the bind mount shown in the next section, the log file is `d:\iiotedge\publisher-publisher.log`. You can also use the [iot-edge-opc-publisher-diagnostics tool](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics).
+
+### Make the configuration files accessible on the host
+
+To make the IoT Edge module configuration files accessible in the host file system, use the following **Container Create Options**. The following example shows a deployment using Linux containers for Windows:
+
+```json
+{
+ "Hostname": "publisher",
+ "Cmd": [
+ "--pf=./pn.json",
+ "--aa"
+ ],
+ "HostConfig": {
+ "Binds": [
+ "d:/iiotedge:/appdata"
+ ]
+ }
+}
+```
+
+With these options, OPC Publisher reads the nodes it should publish from the file `./pn.json` and the container's working directory is set to `/appdata` at startup. With these settings, OPC Publisher reads the file `/appdata/pn.json` from the container to get its configuration. Without the `--pf` option, OPC Publisher tries to read the default configuration file `./publishednodes.json`.
+
+The log file, using the default name `publisher-publisher.log`, is written to `/appdata` and the `CertificateStores` directory is also created in this directory.
+
+To make all these files available in the host file system, the container configuration requires a bind mount volume. The `d:/iiotedge:/appdata` bind maps the directory `/appdata`, which is the current working directory on container startup, to the host directory `d:/iiotedge`. Without this option, no file data is persisted when the container next starts.
+
+If you're running Windows containers, then the syntax of the `Binds` parameter is different. At container startup, the working directory is `c:\appdata`. To put the configuration file in the directory `d:\iiotedge` on the host, specify the following mapping in the `HostConfig` section:
+
+```json
+"HostConfig": {
+ "Binds": [
+ "d:/iiotedge:c:/appdata"
+ ]
+}
+```
+
+If you're running Linux containers on Linux, the syntax of the `Binds` parameter is again different. At container startup, the working directory is `/appdata`. To put the configuration file in the directory `/iiotedge` on the host, specify the following mapping in the `HostConfig` section:
+
+```json
+"HostConfig": {
+ "Binds": [
+ "/iiotedge:/appdata"
+ ]
+}
+```
+
+## Considerations when using a container
+
+The following sections list some things to keep in mind when you use a container:
+
+### Access to the OPC Publisher OPC UA server
+
+By default, the OPC Publisher OPC UA server listens on port 62222. To expose this inbound port in a container, use the following command:
+
+```sh/cmd
+docker run -p 62222:62222 mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] [options]
+```
+
+### Enable intercontainer name resolution
+
+To enable name resolution from within the container to other containers, create a user-defined Docker bridge network, and connect the container to this network using the `--network` option. Also assign the container a name using the `--name` option as follows:
+
+```sh/cmd
+docker network create -d bridge iot_edge
+docker run --network iot_edge --name publisher mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] [options]
+```
+
+The container is now reachable using the name `publisher` by other containers on the same network.
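+
+As a quick check of the name resolution, a sketch using any image that includes `ping` (here `alpine`, an assumption):
+
+```sh/cmd
+docker run --rm --network iot_edge alpine ping -c 3 publisher
+```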
+
+### Access other systems from within the container
+
+Other containers can be reached using the parameters described in the previous section. If the operating system on which Docker is hosted is DNS-enabled, then you can access all systems that are known to DNS.
+
+In networks that use NetBIOS name resolution, enable access to other systems by starting your container with the `--add-host` option. This option effectively adds an entry to the container's host file:
+
+```cmd/sh
+docker run --add-host mydevbox:192.168.178.23 mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] [options]
+```
+
+### Assign a hostname
+
+OPC Publisher uses the hostname of the machine it's running on for certificate and endpoint generation. Docker chooses a random hostname if one isn't set by the `-h` option. The following example shows how to set the internal hostname of the container to `publisher`:
+
+```sh/cmd
+docker run -h publisher mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] [options]
+```
+
+### Use bind mounts (shared filesystem)
+
+Instead of using the container file system, you may choose the host file system to store configuration information and log files. To configure this option, use the `-v` option of `docker run` in the bind mount mode.
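+
+For example, a minimal sketch that keeps the configuration, log, and certificate files on the host (the host directory `/opcpublisher` is an assumption; use any directory you control):
+
+```sh/cmd
+docker run -v /opcpublisher:/appdata mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] [options]
+```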
+
+## OPC UA X.509 certificates
+
+OPC UA uses X.509 certificates to authenticate the OPC UA client and server when they establish a connection and to encrypt the communication between them. OPC Publisher uses certificate stores maintained by the OPC UA stack to manage all certificates. On startup, OPC Publisher checks if there's a certificate for itself. If there's no certificate in the certificate store, and none is passed in on the command line, OPC Publisher creates a self-signed certificate. For more information, see the **InitApplicationSecurityAsync** method in `OpcApplicationConfigurationSecurity.cs`.
+
+Self-signed certificates don't provide any security, as they're not signed by a trusted CA.
+
+OPC Publisher provides command-line options to:
+
+- Retrieve CSR information of the current application certificate used by OPC Publisher.
+- Provision OPC Publisher with a CA signed certificate.
+- Provision OPC Publisher with a new key pair and matching CA signed certificate.
+- Add certificates to a trusted peer or trusted issuer certificate store.
+- Add a CRL.
+- Remove a certificate from the trusted peer or trusted issuers certificate store.
+
+All these options let you pass in parameters using files or base64 encoded strings.
+
+The default store type for all certificate stores is the file system, which you can change using command-line options. Because the container doesn't provide persistent storage in its file system, you must choose a different store type. Use the Docker `-v` option to persist the certificate stores in the host file system or on a Docker volume. If you use a Docker volume, you can pass in certificates using base64 encoded strings.
+
+The runtime environment affects how certificates are persisted. Avoid creating new certificate stores each time you run the application:
+
+- Running natively on Windows, you can't use an application certificate store of type `Directory` because access to the private key fails. In this case, use the option `--at X509Store`.
+- Running as Linux docker container, you can map the certificate stores to the host file system with the docker run option `-v <hostdirectory>:/appdata`. This option makes the certificate persistent across application runs.
+- Running as Linux docker container and you want to use an X509 store for the application certificate, use the docker run option `-v x509certstores:/root/.dotnet/corefx/cryptography/x509stores` and the application option `--at X509Store`
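+
+As a sketch of the last option in the list above, combining the Docker volume for the X509 store with the `--at X509Store` application option:
+
+```sh/cmd
+docker run -v x509certstores:/root/.dotnet/corefx/cryptography/x509stores mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] --at X509Store [options]
+```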
+
+## Performance and memory considerations
+
+This section discusses options for managing memory and performance:
+
+### Command-line parameters to control performance and memory
+
+When you run OPC Publisher, you need to be aware of your performance requirements and the memory resources available on your host.
+
+Memory and performance are interdependent and both depend on the configuration of how many nodes you configure to publish. Ensure that the following parameters meet your requirements:
+
+- IoT Hub sends interval: `--si`
+- IoT Hub message size (default `262144` bytes): `--ms`
+- Monitored items queue capacity: `--mq`
+
+The `--mq` parameter controls the upper bound of the capacity of the internal queue, which buffers all OPC node value change notifications. If OPC Publisher can't send messages to IoT Hub fast enough, this queue buffers the notifications. The parameter sets the number of notifications that can be buffered. If you see the number of items in this queue increasing in your test runs, then to avoid losing messages you should:
+
+- Reduce the IoT Hub send interval
+- Increase the IoT Hub message size
+
+The `--si` parameter forces OPC Publisher to send messages to IoT Hub at the specified interval. OPC Publisher sends a message as soon as the message size specified by the `--ms` parameter is reached, or as soon as the interval specified by the `--si` parameter is reached. To disable the message size option, use `--ms 0`. In this case, OPC Publisher uses the largest possible IoT Hub message size of 256 kB to batch data.
+
+The `--ms` parameter lets you batch messages sent to IoT Hub. The protocol you're using determines whether the overhead of sending a message to IoT Hub is high compared to the actual time of sending the payload. If your scenario allows for latency when data is ingested by IoT Hub, configure OPC Publisher to use the largest message size of 256 kB.
+
+Before you use OPC Publisher in production scenarios, test the performance and memory usage under production conditions. You can use the `--di` parameter to specify the interval, in seconds, that OPC Publisher writes diagnostic information.
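+
+For example, a hedged sketch of a test run that batches up to the maximum message size, sends at least every second, and prints diagnostics every 60 seconds (the values are illustrative, not recommendations):
+
+```sh/cmd
+docker run mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<IoTHubconnectionstring>] --si 1 --ms 262144 --mq 8192 --di 60
+```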
+
+### Test measurements
+
+The following example diagnostics show measurements with different values for `--si` and `--ms` parameters publishing 500 nodes with an OPC publishing interval of 1 second. The test used an OPC Publisher debug build on Windows 10 natively for 120 seconds. The IoT Hub protocol was the default MQTT protocol.
+
+#### Default configuration (--si 10 --ms 262144)
+
+```log
+==========================================================================
+OpcPublisher status @ 26.10.2017 15:33:05 (started @ 26.10.2017 15:31:09)
+
+OPC sessions: 1
+connected OPC sessions: 1
+connected OPC subscriptions: 5
+OPC monitored items: 500
+
+monitored items queue bounded capacity: 8192
+monitored items queue current items: 0
+monitored item notifications enqueued: 54363
+monitored item notifications enqueue failure: 0
+monitored item notifications dequeued: 54363
+
+messages sent to IoT Hub: 109
+last successful msg sent @: 26.10.2017 15:33:04
+bytes sent to IoT Hub: 12709429
+avg msg size: 116600
+msg send failures: 0
+messages too large to sent to IoT Hub: 0
+times we missed send interval: 0
+
+current working set in MB: 90
+--si setting: 10
+--ms setting: 262144
+--ih setting: Mqtt
+==========================================================================
+```
+
+The default configuration sends data to IoT Hub every 10 seconds, or when 256 kB of data is available for IoT Hub to ingest. This configuration adds a moderate latency of about 10 seconds, but has the lowest probability of losing data because of the large message size. The diagnostics output shows there are no lost OPC node updates: `monitored item notifications enqueue failure: 0`.
+
+#### Constant send interval (--si 1 --ms 0)
+
+```log
+==========================================================================
+OpcPublisher status @ 26.10.2017 15:35:59 (started @ 26.10.2017 15:34:03)
+
+OPC sessions: 1
+connected OPC sessions: 1
+connected OPC subscriptions: 5
+OPC monitored items: 500
+
+monitored items queue bounded capacity: 8192
+monitored items queue current items: 0
+monitored item notifications enqueued: 54243
+monitored item notifications enqueue failure: 0
+monitored item notifications dequeued: 54243
+
+messages sent to IoT Hub: 109
+last successful msg sent @: 26.10.2017 15:35:59
+bytes sent to IoT Hub: 12683836
+avg msg size: 116365
+msg send failures: 0
+messages too large to sent to IoT Hub: 0
+times we missed send interval: 0
+
+current working set in MB: 90
+--si setting: 1
+--ms setting: 0
+--ih setting: Mqtt
+==========================================================================
+```
+
+When the message size is set to 0, OPC Publisher internally batches data using the largest supported IoT Hub message size, which is 256 kB. The diagnostic output shows
+an average message size of 116,365 bytes. In this configuration OPC Publisher doesn't lose any OPC node value updates, and compared to the default it has lower latency.
+
+#### Send each OPC node value update (--si 0 --ms 0)
+
+```log
+==========================================================================
+OpcPublisher status @ 26.10.2017 15:39:33 (started @ 26.10.2017 15:37:37)
+
+OPC sessions: 1
+connected OPC sessions: 1
+connected OPC subscriptions: 5
+OPC monitored items: 500
+
+monitored items queue bounded capacity: 8192
+monitored items queue current items: 8184
+monitored item notifications enqueued: 54232
+monitored item notifications enqueue failure: 44624
+monitored item notifications dequeued: 1424
+
+messages sent to IoT Hub: 1423
+last successful msg sent @: 26.10.2017 15:39:33
+bytes sent to IoT Hub: 333046
+avg msg size: 234
+msg send failures: 0
+messages too large to sent to IoT Hub: 0
+times we missed send interval: 0
+
+current working set in MB: 96
+--si setting: 0
+--ms setting: 0
+--ih setting: Mqtt
+==========================================================================
+```
+
+This configuration sends a message to IoT Hub for each OPC node value change. The diagnostics show the average message size is 234 bytes, which is small. The advantage of this configuration is that OPC Publisher doesn't add any latency. The number of
+lost OPC node value updates (`monitored item notifications enqueue failure: 44624`) is high, which makes this configuration unsuitable for scenarios with high volumes of telemetry to be published.
+
+#### Maximum batching (--si 0 --ms 262144)
+
+```log
+==========================================================================
+OpcPublisher status @ 26.10.2017 15:42:55 (started @ 26.10.2017 15:41:00)
+
+OPC sessions: 1
+connected OPC sessions: 1
+connected OPC subscriptions: 5
+OPC monitored items: 500
+
+monitored items queue bounded capacity: 8192
+monitored items queue current items: 0
+monitored item notifications enqueued: 54137
+monitored item notifications enqueue failure: 0
+monitored item notifications dequeued: 54137
+
+messages sent to IoT Hub: 48
+last successful msg sent @: 26.10.2017 15:42:55
+bytes sent to IoT Hub: 12565544
+avg msg size: 261782
+msg send failures: 0
+messages too large to sent to IoT Hub: 0
+times we missed send interval: 0
+
+current working set in MB: 90
+--si setting: 0
+--ms setting: 262144
+--ih setting: Mqtt
+==========================================================================
+```
+
+This configuration batches as many OPC node value updates as possible. The maximum IoT Hub message size is 256 kB, which is configured here. There's no send interval requested, which means the amount of data for IoT Hub to ingest determines the latency. This configuration has the least probability of losing any OPC node values and is suitable for publishing a high number of nodes. When you use this configuration, ensure your scenario doesn't have conditions where high latency is introduced if the message size of 256 kB isn't reached.
+
+## Debug the application
+
+To debug the application, open the **opcpublisher.sln** solution file with Visual Studio and use the Visual Studio debugging tools.
+
+If you need to access the OPC UA server in OPC Publisher, make sure that your firewall allows access to the port the server listens on. The default port is 62222.
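+
+For example, on Windows you might open the port from an elevated prompt as follows (a sketch only; adjust the rule to your own firewall policy):
+
+```cmd
+netsh advfirewall firewall add rule name="OPC Publisher OPC UA server" dir=in action=allow protocol=TCP localport=62222
+```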
+
+## Control the application remotely
+
+Configuring the nodes to publish can be done using IoT Hub direct methods.
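+
+For example, a sketch of calling the **PublishNodes** direct method from the Azure CLI when OPC Publisher runs as an IoT Edge module named `publisher`. The payload shown is an assumption that mirrors an entry of the `publishednodes.json` configuration file; check [HubMethodModel.cs](https://github.com/Azure/iot-edge-opc-publisher/tree/master/opcpublisher) for the exact schema:
+
+```sh/cmd
+az iot hub invoke-module-method --hub-name <your-iot-hub> --device-id <your-edge-device> --module-id publisher --method-name PublishNodes --method-payload '{"EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer", "OpcNodes": [{"Id": "i=2258"}]}'
+```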
+
+OPC Publisher implements a few additional IoT Hub direct method calls to read:
+
+- General information.
+- Diagnostic information on OPC sessions, subscriptions, and monitored items.
+- Diagnostic information on IoT Hub messages and events.
+- The startup log.
+- The last 100 lines of the log.
+- Shut down the application.
+
+The following GitHub repositories contain tools to [configure the nodes to publish](https://github.com/Azure-Samples/iot-edge-opc-publisher-nodeconfiguration) and [read the diagnostic information](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics). Both tools are also available as containers in Docker Hub.
+
+## Use a sample OPC UA server
+
+If you don't have a real OPC UA server, you can use the [sample OPC UA PLC](https://github.com/Azure-Samples/iot-edge-opc-plc) to get started. This sample PLC is also available on Docker Hub.
+
+It implements a number of tags that generate random data, as well as tags with anomalies. You can extend the sample if you need to simulate additional tag values.
+
+## Next steps
+
+Now that you've learned how to run OPC Publisher, the recommended next steps are to learn about [OPC Twin](overview-opc-twin.md) and [OPC Vault](overview-opc-vault.md).
iot-accelerators Howto Opc Twin Deploy Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-dependencies.md
+
+ Title: How to deploy OPC Twin cloud dependencies in Azure | Microsoft Docs
+description: This article describes how to deploy the OPC Twin Azure dependencies needed to do local development and debugging.
+Last updated: 11/26/2018
+# Deploying dependencies for local development
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article explains how to deploy only the Azure platform services needed to do local development and debugging. At the end, you'll have a resource group deployed that contains everything you need for local development and debugging.
+
+## Deploy Azure platform services
+
+1. Make sure you have PowerShell and [AzureRM PowerShell](/powershell/azure/azurerm/install-azurerm-ps) extensions installed. Open a command prompt or terminal and run:
+
+ ```bash
+ git clone https://github.com/Azure/azure-iiot-components
+ cd azure-iiot-components
+ ```
+
+ ```bash
+ deploy -type local
+ ```
+
+2. Follow the prompts to assign a name to the resource group for your deployment. The script deploys only the dependencies to this resource group in your Azure subscription, but not the micro services. The script also registers an Application in Azure AD. This is needed to support OAUTH-based authentication. Deployment can take several minutes.
+
+3. Once the script completes, you can select to save the .env file. The .env environment file is the configuration file of all services and tools you want to run on your development machine.
+
+## Troubleshooting deployment failures
+
+### Resource group name
+
+Ensure you use a short and simple resource group name. The name is also used to name resources, so it must comply with resource naming requirements.
+
+### Azure Active Directory (AD) registration
+
+The deployment script tries to register Azure AD applications in Azure AD. Depending on your rights to the selected Azure AD tenant, this might fail. There are three options:
+
+1. If you chose an Azure AD tenant from a list of tenants, restart the script and choose a different one from the list.
+2. Alternatively, deploy a private Azure AD tenant, restart the script and select to use it.
+3. Continue without authentication. Since you're running your microservices locally, this is acceptable, but it doesn't mimic production environments.
+
+## Next steps
+
+Now that you have successfully deployed OPC Twin services to an existing project, here is the suggested next step:
+
+> [!div class="nextstepaction"]
+> [Learn about how to deploy OPC Twin modules](howto-opc-twin-deploy-modules.md)
iot-accelerators Howto Opc Twin Deploy Existing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-existing.md
+
+ Title: How to deploy an OPC Twin module to an existing Azure project | Microsoft Docs
+description: This article describes how to deploy OPC Twin to an existing project. You can also learn how to troubleshoot deployment failures.
+ Last updated: 11/26/2018
+# Deploy OPC Twin to an existing project
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+The OPC Twin module runs on IoT Edge and provides several edge services to the OPC Twin and Registry services.
+
+The OPC Twin microservice facilitates the communication between factory operators and OPC UA server devices on the factory floor via an OPC Twin IoT Edge module. The microservice exposes OPC UA services (Browse, Read, Write, and Execute) via its REST API.
+
+The OPC UA device registry microservice provides access to registered OPC UA applications and their endpoints. Operators and administrators can register and unregister new OPC UA applications and browse the existing ones, including their endpoints. In addition to application and endpoint management, the registry service also catalogs registered OPC Twin IoT Edge modules. The service API gives you control of edge module functionality, for example, starting or stopping server discovery (scanning services), or activating new endpoint twins that can be accessed using the OPC Twin microservice.
+
+The core of the module is the Supervisor identity. The supervisor manages endpoint twins, which correspond to OPC UA server endpoints that are activated using the corresponding OPC UA registry API. These endpoint twins translate OPC UA JSON received from the OPC Twin microservice running in the cloud into OPC UA binary messages, which are sent over a stateful secure channel to the managed endpoint. The supervisor also provides discovery services that send device discovery events to the OPC UA device onboarding service for processing, where these events result in updates to the OPC UA registry. This article shows you how to deploy the OPC Twin module to an existing project.
+
+> [!NOTE]
+> For more information on deployment details and instructions, see the GitHub [repository](https://github.com/Azure/azure-iiot-opc-twin-module).
+
+## Prerequisites
+
+Make sure you have PowerShell and [AzureRM PowerShell](/powershell/azure/azurerm/install-azurerm-ps) extensions installed. If you haven't already done so, clone this GitHub repository. Run the following commands in PowerShell:
+
+```powershell
+git clone --recursive https://github.com/Azure/azure-iiot-components.git
+cd azure-iiot-components
+```
+
+## Deploy industrial IoT services to Azure
+
+1. In your PowerShell session, run:
+
+ ```powershell
+ set-executionpolicy -ExecutionPolicy Unrestricted -Scope Process
+ .\deploy.cmd
+ ```
+
+2. Follow the prompts to assign a name to the resource group of the deployment and a name to the website. The script deploys the microservices and their Azure platform dependencies into the resource group in your Azure subscription. The script also registers an Application in your Azure Active Directory (AAD) tenant to support OAUTH-based authentication. Deployment will take several minutes. An example of what you'd see once the solution is successfully deployed:
+
+ ![Industrial IoT OPC Twin deploy to existing project](media/howto-opc-twin-deploy-existing/opc-twin-deploy-existing1.png)
+
+ The output includes the URL of the public endpoint.
+
+3. Once the script completes successfully, select whether you want to save the `.env` file. You need the `.env` environment file if you want to connect to the cloud endpoint using tools such as the Console or deploy modules for development and debugging.
+
+## Troubleshooting deployment failures
+
+### Resource group name
+
+Ensure you use a short and simple resource group name. The name is also used to name resources, so it must comply with resource naming requirements.
+
+### Website name already in use
+
+It is possible that the name of the website is already in use. If you run into this error, you need to use a different application name.
+
+### Azure Active Directory (AAD) registration
+
+The deployment script tries to register two AAD applications in Azure Active Directory. Depending on your rights to the selected AAD tenant, the deployment might fail. There are two options:
+
+* If you chose an AAD tenant from a list of tenants, restart the script and choose a different one from the list.
+* Alternatively, deploy a private AAD tenant in another subscription, restart the script, and select to use it.
+
+> [!WARNING]
+> NEVER continue without Authentication. If you choose to do so, anyone can access your OPC Twin endpoints from the Internet unauthenticated. You can always choose the ["local" deployment option](howto-opc-twin-deploy-dependencies.md) to kick the tires.
+
+## Deploy an all-in-one industrial IoT services demo
+
+Instead of deploying just the services and dependencies, you can also deploy an all-in-one demo. The all-in-one demo contains three OPC UA servers, the OPC Twin module, all microservices, and a sample web application. It is intended for demonstration purposes.
+
+1. Make sure you have a clone of the repository (see above). Open a PowerShell prompt in the root of the repository and run:
+
+ ```powershell
+ set-executionpolicy -ExecutionPolicy Unrestricted -Scope Process
+ .\deploy -type demo
+ ```
+
+2. Follow the prompts to assign a new name to the resource group and a name to the website. Once deployed successfully, the script will display the URL of the web application endpoint.
+
+## Deployment script options
+
+The script takes the following parameters:
+
+```powershell
+-type
+```
+
+The type of deployment (vm, local, or demo).
+
+```powershell
+-resourceGroupName
+```
+
+Can be the name of an existing or a new resource group.
+
+```powershell
+-subscriptionId
+```
+
+Optional, the subscription ID where resources will be deployed.
+
+```powershell
+-subscriptionName
+```
+
+Alternatively, the subscription name can be used.
+
+```powershell
+-resourceGroupLocation
+```
+
+Optional, a resource group location. If specified, the script tries to create a new resource group in this location.
+
+```powershell
+-aadApplicationName
+```
+
+A name for the AAD application to register under.
+
+```powershell
+-tenantId
+```
+
+The AAD tenant to use.
+
+```powershell
+-credentials
+```
+
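+Putting these options together, a non-interactive deployment might look like the following sketch. All names, the location, and the tenant ID are illustrative; adjust them for your environment:
+
+```powershell
+# Illustrative invocation of the deployment script with explicit options
+.\deploy -type vm `
+    -resourceGroupName myIIoTGroup `
+    -resourceGroupLocation "West Europe" `
+    -subscriptionName "My Subscription" `
+    -aadApplicationName myIIoTApp `
+    -tenantId 00000000-0000-0000-0000-000000000000
+```
+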
+## Next steps
+
+Now that you've learned how to deploy OPC Twin to an existing project, here is the suggested next step:
+
+> [!div class="nextstepaction"]
+> [Secure communication of OPC UA Client and OPC UA PLC](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Twin Deploy Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-modules.md
+
+ Title: How to deploy OPC Twin module for Azure from scratch | Microsoft Docs
+description: This article describes how to deploy OPC Twin from scratch using the Azure portal's IoT Edge blade and also using AZ CLI.
+ Last updated: 11/26/2018
+# Deploy OPC Twin module and dependencies from scratch
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+The OPC Twin module runs on IoT Edge and provides several edge services to the OPC device twin and registry services.
+
+There are several options to deploy modules to your [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) Gateway, among them:
+
+- [Deploying from Azure portal's IoT Edge blade](../iot-edge/how-to-deploy-modules-portal.md)
+- [Deploying using AZ CLI](../iot-edge/how-to-deploy-cli-at-scale.md)
+
+> [!NOTE]
+> For more information on deployment details and instructions, see the GitHub [repository](https://github.com/Azure/azure-iiot-components).
+
+## Deployment manifest
+
+All modules are deployed using a deployment manifest. An example manifest to deploy both [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher) and [OPC Twin](https://github.com/Azure/azure-iiot-opc-twin-module) is shown below.
+
+```json
+{
+ "content": {
+ "modulesContent": {
+ "$edgeAgent": {
+ "properties.desired": {
+ "schemaVersion": "1.0",
+ "runtime": {
+ "type": "docker",
+ "settings": {
+ "minDockerVersion": "v1.25",
+ "loggingOptions": "",
+ "registryCredentials": {}
+ }
+ },
+ "systemModules": {
+ "edgeAgent": {
+ "type": "docker",
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
+ "createOptions": ""
+ }
+ },
+ "edgeHub": {
+ "type": "docker",
+ "status": "running",
+ "restartPolicy": "always",
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
+ "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}], \"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
+ }
+ }
+ },
+ "modules": {
+ "opctwin": {
+ "version": "1.0",
+ "type": "docker",
+ "status": "running",
+ "restartPolicy": "always",
+ "settings": {
+ "image": "mcr.microsoft.com/iotedge/opc-twin:latest",
+ "createOptions": "{\"NetworkingConfig\": {\"EndpointsConfig\": {\"host\": {}}}, \"HostConfig\": {\"NetworkMode\": \"host\" }}"
+ }
+ },
+ "opcpublisher": {
+ "version": "2.0",
+ "type": "docker",
+ "status": "running",
+ "restartPolicy": "always",
+ "settings": {
+ "image": "mcr.microsoft.com/iotedge/opc-publisher:latest",
+ "createOptions": "{\"Hostname\":\"publisher\",\"Cmd\":[\"publisher\",\"--pf=./pn.json\",\"--di=60\",\"--tm\",\"--aa\",\"--si=0\",\"--ms=0\"],\"ExposedPorts\":{\"62222/tcp\":{}},\"NetworkingConfig\":{\"EndpointsConfig\":{\"host\":{}}},\"HostConfig\":{\"NetworkMode\":\"host\",\"PortBindings\":{\"62222/tcp\":[{\"HostPort\":\"62222\"}]}}}"
+ }
+ }
+ }
+ }
+ },
+ "$edgeHub": {
+ "properties.desired": {
+ "schemaVersion": "1.0",
+ "routes": {
+ "opctwinToIoTHub": "FROM /messages/modules/opctwin/* INTO $upstream",
+ "opcpublisherToIoTHub": "FROM /messages/modules/opcpublisher/* INTO $upstream"
+ },
+ "storeAndForwardConfiguration": {
+ "timeToLiveSecs": 7200
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+## Deploying from Azure portal
+
+The easiest way to deploy the modules to an Azure IoT Edge gateway device is through the Azure portal.
+
+### Prerequisites
+
+1. Deploy the OPC Twin [dependencies](howto-opc-twin-deploy-dependencies.md) and obtain the resulting `.env` file. Note the deployed `hub name` of the `PCS_IOTHUBREACT_HUB_NAME` variable in the resulting `.env` file.
+
+2. Register and start a [Linux](../iot-edge/how-to-install-iot-edge.md) or [Windows](../iot-edge/how-to-install-iot-edge.md) IoT Edge gateway and note its `device id`.
+
+### Deploy to an edge device
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT hub.
+
+2. Select **IoT Edge** from the left-hand menu.
+
+3. Click on the ID of the target device from the list of devices.
+
+4. Select **Set Modules**.
+
+5. In the **Deployment modules** section of the page, select **Add** and **IoT Edge Module.**
+
+6. In the **IoT Edge Custom Module** dialog, use `opctwin` as the name for the module, then specify the container *Image URI* as
+
+ ```bash
+ mcr.microsoft.com/iotedge/opc-twin:latest
+ ```
+
+ As *Container Create Options*, use the following JSON:
+
+ ```json
+ {"NetworkingConfig": {"EndpointsConfig": {"host": {}}}, "HostConfig": {"NetworkMode": "host" }}
+ ```
+
+ Fill out the optional fields if necessary. For more information about container create options, restart policy, and desired status see [EdgeAgent desired properties](../iot-edge/module-edgeagent-edgehub.md#edgeagent-desired-properties). For more information about the module twin see [Define or update desired properties](../iot-edge/module-composition.md#define-or-update-desired-properties).
+
+7. Select **Save** and repeat step **5**.
+
+8. In the **IoT Edge Custom Module** dialog, use `opcpublisher` as the name for the module, and specify the container *Image URI* as
+
+ ```bash
+ mcr.microsoft.com/iotedge/opc-publisher:latest
+ ```
+
+ As *Container Create Options*, use the following JSON:
+
+ ```json
+ {"Hostname":"publisher","Cmd":["publisher","--pf=./pn.json","--di=60","--tm","--aa","--si=0","--ms=0"],"ExposedPorts":{"62222/tcp":{}},"HostConfig":{"PortBindings":{"62222/tcp":[{"HostPort":"62222"}] }}}
+ ```
+
+9. Select **Save** and then **Next** to continue to the routes section.
+
+10. In the routes tab, paste the following
+
+ ```json
+ {
+ "routes": {
+ "opctwinToIoTHub": "FROM /messages/modules/opctwin/* INTO $upstream",
+ "opcpublisherToIoTHub": "FROM /messages/modules/opcpublisher/* INTO $upstream"
+ }
+ }
+ ```
+
+ and select **Next**
+
+11. Review your deployment information and manifest. It should look like the above deployment manifest. Select **Submit**.
+
+12. Once you've deployed modules to your device, you can view all of them in the **Device details** page of the portal. This page displays the name of each deployed module, as well as useful information like the deployment status and exit code.
+
+## Deploying using Azure CLI
+
+### Prerequisites
+
+1. Install the latest version of the [Azure command-line interface (Azure CLI)](/cli/azure/) by following the [installation instructions](/cli/azure/install-azure-cli).
+
+### Quickstart
+
+1. Save the above deployment manifest into a `deployment.json` file.
+
+2. Use the following command to apply the configuration to an IoT Edge device:
+
+ ```azurecli
+ az iot edge set-modules --device-id [device id] --hub-name [hub name] --content ./deployment.json
+ ```
+
+ The `device id` parameter is case-sensitive. The content parameter points to the deployment manifest file that you saved.
+ ![az IoT Edge set-modules output](/azure/iot-edge/media/how-to-deploy-cli/set-modules.png)
+
+3. Once you've deployed modules to your device, you can view all of them with the following command:
+
+ ```azurecli
+ az iot hub module-identity list --device-id [device id] --hub-name [hub name]
+ ```
+
+ The device ID parameter is case-sensitive. ![az iot hub module-identity list output](/azure/iot-edge/media/how-to-deploy-cli/list-modules.png)
+
+## Next steps
+
+Now that you have learned how to deploy OPC Twin from scratch, here is the suggested next step:
+
+> [!div class="nextstepaction"]
+> [Deploy OPC Twin to an existing project](howto-opc-twin-deploy-existing.md)
iot-accelerators Howto Opc Vault Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-deploy.md
+
+ Title: How to deploy the OPC Vault certificate management service - Azure | Microsoft Docs
+description: How to deploy the OPC Vault certificate management service from scratch.
+ Last updated: 08/16/2019
+# Build and deploy the OPC Vault certificate management service
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article explains how to deploy the OPC Vault certificate management service in Azure.
+
+> [!NOTE]
+> For more information, see the GitHub [OPC Vault repository](https://github.com/Azure/azure-iiot-opc-vault-service).
+
+## Prerequisites
+
+### Install required software
+
+Currently, the build and deploy operation is limited to Windows.
+The samples are all written for C# .NET Standard, which you need in order to build the service and samples for deployment.
+All the tools you need for .NET Standard come with the .NET Core tools. See [Get started with .NET Core](/dotnet/articles/core/getting-started).
+
+1. [Install .NET Core 2.1+][dotnet-install].
+2. [Install Docker][docker-url] (optional, only if the local Docker build is required).
+3. Install the [Azure command-line tools for PowerShell][powershell-install].
+4. Sign up for an [Azure subscription][azure-free].
+
+### Clone the repository
+
+If you haven't done so yet, clone this GitHub repository. Open a command prompt or terminal, and run the following:
+
+```bash
+git clone https://github.com/Azure/azure-iiot-opc-vault-service
+cd azure-iiot-opc-vault-service
+```
+
+Alternatively, you can clone the repo directly in Visual Studio 2017.
+
+### Build and deploy the Azure service on Windows
+
+A PowerShell script provides an easy way to deploy the OPC Vault microservice and the application.
+
+1. Open a PowerShell window at the repo root.
+2. Go to the deploy folder: `cd deploy`.
+3. Choose a name for `myResourceGroup` that's unlikely to cause a conflict with other deployed webpages. See the "Website name already in use" section later in this article.
+4. Start the deployment with `.\deploy.ps1` for interactive installation, or enter a full command line:
+`.\deploy.ps1 -subscriptionName "MySubscriptionName" -resourceGroupLocation "East US" -tenantId "myTenantId" -resourceGroupName "myResourceGroup"`
+5. If you plan to develop with this deployment, add `-development 1` to enable the Swagger UI, and to deploy debug builds.
+6. Follow the instructions in the script to sign in to your subscription, and to provide additional information.
+7. After a successful build and deploy operation, you should see the following message:
+ ```
+ To access the web client go to:
+ https://myResourceGroup.azurewebsites.net
+
+ To access the web service go to:
+ https://myResourceGroup-service.azurewebsites.net
+
+ To start the local docker GDS server:
+ .\myResourceGroup-dockergds.cmd
+
+ To start the local dotnet GDS server:
+ .\myResourceGroup-gds.cmd
+ ```
+
+ > [!NOTE]
+ > In case of problems, see the "Troubleshooting deployment failures" section later in the article.
+
+8. Open your favorite browser, and open the application page: `https://myResourceGroup.azurewebsites.net`
+9. Give the web app and the OPC Vault microservice a few minutes to warm up after deployment. The web home page might stop responding on first use, for up to a minute, until you get the first responses.
+10. To take a look at the Swagger API, open: `https://myResourceGroup-service.azurewebsites.net`
+11. To start a local GDS server with dotnet, start `.\myResourceGroup-gds.cmd`. With Docker, start `.\myResourceGroup-dockergds.cmd`.
+
+It's possible to redeploy a build with exactly the same settings. Be aware that such an operation renews all application secrets, and might reset some settings in the Azure Active Directory (Azure AD) application registrations.
+
+It's also possible to redeploy just the web app binaries. With the parameter `-onlyBuild 1`, new zip packages of the service and the app are deployed to the web applications.
+
+After successful deployment, you can start using the services. See [Manage the OPC Vault certificate management service](howto-opc-vault-manage.md).
+
+## Delete the services from the subscription
+
+Here's how:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Go to the resource group in which the service was deployed.
+3. Select **Delete resource group**, and confirm.
+4. After a short while, all deployed service components are deleted.
+5. Go to **Azure Active Directory** > **App registrations**.
+6. There should be three registrations listed for each deployed resource group. The registrations have the following names:
+`resourcegroup-client`, `resourcegroup-module`, `resourcegroup-service`. Delete each registration separately.
+
+Now all deployed components are removed.
+
+## Troubleshooting deployment failures
+
+### Resource group name
+
+Use a short and simple resource group name. The name is also used to name resources and the service URL prefix. As such, it must comply with resource naming requirements.
+
+### Website name already in use
+
+It's possible that the name of the website is already in use. If so, you need to use a different resource group name. The hostnames in use by the deployment script are: https:\//resourcegroupname.azurewebsites.net and https:\//resourcegroupname-service.azurewebsites.net.
+Other names of services are built by the combination of short name hashes, and are unlikely to conflict with other services.
+
+### Azure AD registration
+
+The deployment script tries to register three Azure AD applications in Azure AD. Depending on your permissions in the selected Azure AD tenant, this operation might fail. There are two options:
+
+- If you chose an Azure AD tenant from a list of tenants, restart the script and choose a different one from the list.
+- Alternatively, deploy a private Azure AD tenant in another subscription. Restart the script, and select to use it.
+
+## Deployment script options
+
+The script takes the following parameters:
+
+```
+-resourceGroupName
+```
+
+This can be the name of an existing or a new resource group.
+
+```
+-subscriptionId
+```
+
+This is the subscription ID where resources will be deployed. It's optional.
+
+```
+-subscriptionName
+```
+
+Alternatively, you can use the subscription name.
+
+```
+-resourceGroupLocation
+```
+
+This is a resource group location. If specified, this parameter tries to create a new resource group in this location. This parameter is also optional.
+
+```
+-tenantId
+```
+
+This is the Azure AD tenant to use.
+
+```
+-development 0|1
+```
+
+This deploys for development: it uses a debug build, and sets the ASP.NET environment to Development. It also creates `.publishsettings` for import in Visual Studio 2017, to allow it to deploy the app and the service directly. This parameter is optional.
+
+```
+-onlyBuild 0|1
+```
+
+This rebuilds and redeploys only the web apps, and rebuilds the Docker containers. This parameter is optional.
+
+[azure-free]:https://azure.microsoft.com/free/
+[powershell-install]:https://azure.microsoft.com/downloads/#powershell
+[docker-url]: https://www.docker.com/
+[dotnet-install]: https://www.microsoft.com/net/learn/get-started
+
+## Next steps
+
+Now that you have learned how to deploy OPC Vault from scratch, you can:
+
+> [!div class="nextstepaction"]
+> [Manage OPC Vault](howto-opc-vault-manage.md)
iot-accelerators Howto Opc Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-manage.md
+
+ Title: How to manage the OPC Vault certificate service - Azure | Microsoft Docs
+description: Manage the OPC Vault root CA certificates and user permissions.
+ Last updated: 8/16/2019
+# Manage the OPC Vault certificate service
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article explains the administrative tasks for the OPC Vault certificate management service in Azure. It includes information about how to renew Issuer CA certificates, how to renew the Certificate Revocation List (CRL), and how to grant and revoke user access.
+
+## Create or renew the root CA certificate
+
+After deploying OPC Vault, you must create the root CA certificate. Without a valid Issuer CA certificate, you can't sign or issue application certificates. Refer to [Certificates](howto-opc-vault-secure-ca.md#certificates) to manage your certificates with reasonable, secure lifetimes. Renew an Issuer CA certificate after half of its lifetime. When renewing, also consider that the configured lifetime of a newly-signed application certificate shouldn't exceed the lifetime of the Issuer CA certificate.
+> [!IMPORTANT]
+> The Administrator role is required to create or renew the Issuer CA certificate.
+
+1. Open your certificate service at `https://myResourceGroup-app.azurewebsites.net`, and sign in.
+2. Go to **Certificate Groups**.
+3. There is one default certificate group listed. Select **Edit**.
+4. In **Edit Certificate Group Details**, you can modify the subject name and lifetime of your CA and application certificates. The subject and the lifetimes should only be set once before the first CA certificate is issued. Lifetime changes during operations might result in inconsistent lifetimes of issued certificates and CRLs.
+5. Enter a valid subject (for example, `CN=My CA Root, O=MyCompany, OU=MyDepartment`).<br>
+ > [!IMPORTANT]
+ > If you change the subject, you must renew the Issuer certificate, or the service will fail to sign application certificates. The subject of the configuration is checked against the subject of the active Issuer certificate. If the subjects don't match, certificate signing is refused.
+6. Select **Save**.
+7. If you encounter a "forbidden" error at this point, your user credentials don't have the administrator permission to modify or create a new root certificate. By default, the user who deployed the service has administrator and signing roles with the service. Other users need to be added to the Approver, Writer or Administrator roles, as appropriate in the Azure Active Directory (Azure AD) application registration.
+8. Select **Details**. This should show the updated information.
+9. Select **Renew CA Certificate** to issue the first Issuer CA certificate, or to renew the Issuer certificate. Then select **OK**.
+10. After a few seconds, you'll see **Certificate Details**. To download the latest CA certificate and CRL for distribution to your OPC UA applications, select **Issuer** or **Crl**.
+
+Now the OPC UA certificate management service is ready to issue certificates for OPC UA applications.
+
+## Renew the CRL
+
+Renewal of the CRL is an update that should be distributed to the applications at regular intervals. OPC UA devices that support the CRL Distribution Point X509 extension can directly update the CRL from the microservice endpoint. Other OPC UA devices might require manual updates, or can be updated by using GDS server push extensions to update the trust lists with the certificates and CRLs.
+
+In the following workflow, all certificate requests in the deleted states are revoked in the CRLs, which correspond to the Issuer CA certificate for which they were issued. The version number of the CRL is incremented by 1. <br>
+> [!NOTE]
+> All issued CRLs are valid until the expiration of the Issuer CA certificate. This is because the OPC UA specification doesn't require a mandatory, deterministic distribution model for CRL.
+
+> [!IMPORTANT]
+> The Administrator role is required to renew the Issuer CRL.
+
+1. Open your certificate service at `https://myResourceGroup.azurewebsites.net`, and sign in.
+2. Go to the **Certificate Groups** page.
+3. Select **Details**. This should show the current certificate and CRL information.
+4. Select **Update CRL Revocation List (CRL)** to issue an updated CRL for all active Issuer certificates in the OPC Vault storage.
+5. After a few seconds, you'll see **Certificate Details**. To download the latest CA certificate and CRL for distribution to your OPC UA applications, select **Issuer** or **Crl**.
+
+## Manage user roles
+
+You manage user roles for the OPC Vault microservice in the Azure AD Enterprise Application. For a detailed description of the role definitions, see [Roles](howto-opc-vault-secure-ca.md#roles).
+
+By default, an authenticated user in the tenant can sign in to the service as a Reader. Higher-privileged roles require manual management in the Azure portal, or by using PowerShell.
+
+### Add user
+
+1. Open the Azure portal.
+2. Go to **Azure Active Directory** > **Enterprise applications**.
+3. Choose the registration of the OPC Vault microservice (by default, your `resourceGroupName-service`).
+4. Go to **Users and Groups**.
+5. Select **Add User**.
+6. Select or invite the user for assignment to a specific role.
+7. Select the role for the users.
+8. Select **Assign**.
+9. For users in the Administrator or Approver role, continue to add Azure Key Vault access policies.
+
+### Remove user
+
+1. Open the Azure portal.
+2. Go to **Azure Active Directory** > **Enterprise applications**.
+3. Choose the registration of the OPC Vault microservice (by default, your `resourceGroupName-service`).
+4. Go to **Users and Groups**.
+5. Select a user with a role to remove, and then select **Remove**.
+6. For removed users in the Administrator or Approver role, also remove them from Azure Key Vault policies.
+
+### Add user access policy to Azure Key Vault
+
+Additional access policies are required for Approvers and Administrators.
+
+By default, the service identity has only limited permissions to access Key Vault, to prevent elevated operations or changes to take place without user impersonation. The basic service permissions are Get and List, for both secrets and certificates. For secrets, there is only one exception: the service can delete a private key from the secret store after it's accepted by a user. All other operations require user impersonated permissions.
+
+#### For an Approver role, the following permissions must be added to Key Vault
+
+1. Open the Azure portal.
+2. Go to your OPC Vault `resourceGroupName`, used during deployment.
+3. Go to the Key Vault `resourceGroupName-xxxxx`.
+4. Go to **Access Policies**.
+5. Select **Add new**.
+6. Skip the template. There's no template that matches requirements.
+7. Choose **Select Principal**, and select the user to be added, or invite a new user to the tenant.
+8. Select the following **Key permissions**: **Get**, **List**, and **Sign**.
+9. Select the following **Secret permissions**: **Get**, **List**, **Set**, and **Delete**.
+10. Select the following **Certificate permissions**: **Get** and **List**.
+11. Select **OK**, and select **Save**.
+
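+If you prefer to script this step, the Azure CLI can grant an equivalent access policy. The following is only a sketch; the Key Vault name and the user principal name are illustrative:
+
+```azurecli
+az keyvault set-policy --name resourceGroupName-xxxxx \
+  --upn approver@contoso.com \
+  --key-permissions get list sign \
+  --secret-permissions get list set delete \
+  --certificate-permissions get list
+```
+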
+#### For an Administrator role, the following permissions must be added to Key Vault
+
+1. Open the Azure portal.
+2. Go to your OPC Vault `resourceGroupName`, used during deployment.
+3. Go to the Key Vault `resourceGroupName-xxxxx`.
+4. Go to **Access Policies**.
+5. Select **Add new**.
+6. Skip the template. There's no template that matches requirements.
+7. Choose **Select Principal**, and select the user to be added, or invite a new user to the tenant.
+8. Select the following **Key permissions**: **Get**, **List**, and **Sign**.
+9. Select the following **Secret permissions**: **Get**, **List**, **Set**, and **Delete**.
+10. Select the following **Certificate permissions**: **Get**, **List**, **Update**, **Create**, and **Import**.
+11. Select **OK**, and select **Save**.
+
+### Remove user access policy from Azure Key Vault
+
+1. Open the Azure portal.
+2. Go to your OPC Vault `resourceGroupName`, used during deployment.
+3. Go to the Key Vault `resourceGroupName-xxxxx`.
+4. Go to **Access Policies**.
+5. Find the user to remove, and select **Delete**.
+
+## Next steps
+
+Now that you have learned how to manage OPC Vault certificates and users, you can:
+
+> [!div class="nextstepaction"]
+> [Secure communication of OPC devices](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Vault Secure Ca https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-secure-ca.md
+
+ Title: How to run the OPC Vault certificate management service securely - Azure | Microsoft Docs
+description: Describes how to run the OPC Vault certificate management service securely in Azure, and reviews other security guidelines to consider.
+ Last updated: 8/16/2019
+# Run the OPC Vault certificate management service securely
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article explains how to run the OPC Vault certificate management service securely in Azure, and reviews other security guidelines to consider.
+
+## Roles
+
+### Trusted and authorized roles
+
+The OPC Vault microservice allows for distinct roles to access various parts of the service.
+
+> [!IMPORTANT]
+> During deployment, the script only adds the user who runs the deployment script as a user for all roles. For a production deployment, you should review this role assignment, and reconfigure appropriately by following the guidelines below. This task requires manual assignment of roles and services in the Azure Active Directory (Azure AD) Enterprise Applications portal.
+
+### Certificate management service roles
+
+The OPC Vault microservice defines the following roles:
+
+- **Reader**: By default, any authenticated user in the tenant has read access.
+ - Read access to applications and certificate requests. Can list and query for applications and certificate requests. Also device discovery information and public certificates are accessible with read access.
+- **Writer**: The Writer role is assigned to a user to add write permissions for certain tasks.
+ - Read/Write access to applications and certificate requests. Can register, update, and unregister applications. Can create certificate requests and obtain approved private keys and certificates. Can also delete private keys.
+- **Approver**: The Approver role is assigned to a user to approve or reject certificate requests. The role doesn't include any other role.
+ - In addition to the Approver role to access the OPC Vault microservice API, the user must also have the key signing permission in Azure Key Vault to be able to sign the certificates.
+ - The Writer and Approver role should be assigned to different users.
+ - The main role of the Approver is the approval of the generation and rejection of certificate requests.
+- **Administrator**: The Administrator role is assigned to a user to manage the certificate groups. The role doesn't support the Approver role, but includes the Writer role.
+ - The administrator can manage the certificate groups, change the configuration, and revoke application certificates by issuing a new Certificate Revocation List (CRL).
+ - Ideally, the Writer, Approver, and Administrator roles are assigned to different users. For additional security, a user with the Approver or Administrator role also needs key-signing permission in Key Vault, to issue certificates or to renew an Issuer CA certificate.
+ - In addition to the microservice administration role, the role includes, but isn't limited to:
+      - Responsibility for administering the implementation of the CA's security practices.
+ - Management of the generation, revocation, and suspension of certificates.
+ - Cryptographic key life-cycle management (for example, the renewal of the Issuer CA keys).
+ - Installation, configuration, and maintenance of services that operate the CA.
+ - Day-to-day operation of the services.
+ - CA and database backup and recovery.
+
+### Other role assignments
+
+Also consider the following roles when you're running the service:
+
+- Business owner of the certificate procurement contract with the external root certification authority (for example, when the owner purchases certificates from an external CA or operates a CA that is subordinate to an external CA).
+- Development and validation of the Certificate Authority.
+- Review of audit records.
+- Personnel that help support the CA or manage the physical and cloud facilities, but aren't directly trusted to perform CA operations, are in the *authorized* role. The set of tasks that persons in the authorized role are allowed to perform must also be documented.
+
+### Review memberships of trusted and authorized roles quarterly
+
+Review membership of trusted and authorized roles at least quarterly. Ensure that the set of people (for manual processes) or service identities (for automated processes) in each role is kept to a minimum.
+
+### Role separation between certificate requester and approver
+
+The certificate issuance process must enforce role separation between the certificate requester and certificate approver roles (persons or automated systems). Certificate issuance must be authorized by a certificate approver role that verifies that the certificate requester is authorized to obtain certificates. The persons who hold the certificate approver role must be formally authorized.
+
+### Restrict assignment of privileged roles
+
+You should restrict assignment of privileged roles, such as authorizing membership of the Administrators and Approvers group, to a limited set of authorized personnel. When a privileged role assignment changes, the previous access must be revoked within 24 hours. Finally, review privileged role assignments on a quarterly basis, and remove any unneeded or expired assignments.
+
+### Privileged roles should use two-factor authentication
+
+Use multi-factor authentication (also called two-factor authentication) for interactive sign-ins of Approvers and Administrators to the service.
+
+## Certificate service operation guidelines
+
+### Operational contacts
+
+The certificate service must have an up-to-date security response plan on file, which contains detailed operational incident response contacts.
+
+### Security updates
+
+All systems must be continuously monitored and updated with the latest security updates.
+
+> [!IMPORTANT]
+> The GitHub repository of the OPC Vault service is continuously updated with security patches. Monitor these updates, and apply them to the service at regular intervals.
+
+### Security monitoring
+
+Subscribe to or implement appropriate security monitoring. For example, subscribe to a central monitoring solution (such as Azure Security Center or Microsoft 365 monitoring solution), and configure it appropriately to ensure that security events are transmitted to the monitoring solution.
+
+> [!IMPORTANT]
+> By default, the OPC Vault service is deployed with [Azure Application Insights](../azure-monitor/app/devops.md) as a monitoring solution. Adding a security solution like [Azure Security Center](https://azure.microsoft.com/services/security-center/) is highly recommended.
+
+### Assess the security of open-source software components
+
+All open-source components used within a product or service must be free of moderate or greater security vulnerabilities.
+
+> [!IMPORTANT]
+> During continuous integration builds, the GitHub repository of the OPC Vault service scans all components for vulnerabilities. Monitor these updates on GitHub, and apply them to the service at regular intervals.
+
+### Maintain an inventory
+
+Maintain an asset inventory for all production hosts (including persistent virtual machines), devices, all internal IP address ranges, VIPs, and public DNS domain names. Whenever you add or remove a system, device IP address, VIP, or public DNS domain, you must update the inventory within 30 days.
+
+#### Inventory of the default Azure OPC Vault microservice production deployment
+
+In Azure:
+- **App Service Plan**: App service plan for service hosts. Default S1.
+- **App Service** for microservice: The OPC Vault service host.
+- **App Service** for sample application: The OPC Vault sample application host.
+- **Key Vault Standard**: To store secrets and Azure Cosmos DB keys for the web services.
+- **Key Vault Premium**: To host the Issuer CA keys, for signing service, and for vault configuration and storage of application private keys.
+- **Azure Cosmos DB**: Database for application and certificate requests.
+- **Application Insights**: (optional) Monitoring solution for web service and application.
+- **Azure AD Application Registration**: A registration for the sample application, the service, and the edge module.
+
+For the cloud services, all hostnames, resource groups, resource names, subscription IDs, and tenant IDs used to deploy the service should be documented.
+
+In Azure IoT Edge or a local IoT Edge server:
+- **OPC Vault IoT Edge module**: To support a factory network OPC UA Global Discovery Server.
+
+For the IoT Edge devices, the hostnames and IP addresses should be documented.
+
+### Document the Certification Authorities (CAs)
+
+The CA hierarchy documentation must contain all operated CAs. This includes all related
+subordinate CAs, parent CAs, and root CAs, even when they aren't managed by the service.
+Instead of formal documentation, you can provide an exhaustive set of all non-expired CA certificates.
+
+> [!NOTE]
+> The OPC Vault sample application supports the download of all certificates used and produced in the service for documentation.
+
+### Document the issued certificates by all Certification Authorities (CAs)
+
+Provide an exhaustive set of all certificates issued in the past 12 months.
+
+> [!NOTE]
+> The OPC Vault sample application supports the download of all certificates used and produced in the service for documentation.
+
+### Document the standard operating procedure for securely deleting cryptographic keys
+
+During the lifetime of a CA, key deletion might happen only rarely. This is why no user has Key Vault Certificate Delete right assigned, and why there are no APIs exposed to delete an Issuer CA certificate. The manual standard operating procedure for securely deleting certification authority cryptographic keys is only available by directly accessing Key Vault in the Azure portal. You can also delete the certificate group in Key Vault. To ensure immediate deletion, disable the
+[Key Vault soft delete](../key-vault/general/soft-delete-overview.md) functionality.
+
+## Certificates
+
+### Certificates must comply with minimum certificate profile
+
+The OPC Vault service is an online CA that issues end-entity certificates to subscribers. The OPC Vault microservice follows these guidelines in the default implementation. A quick way to spot-check an issued certificate against these requirements is shown after the list.
+
+- All certificates must include the following X.509 fields, as specified below:
+ - The content of the version field must be v3.
+ - The contents of the serialNumber field must include at least 8 bytes of entropy obtained from a FIPS (Federal Information Processing Standards) 140 approved random number generator.<br>
+ > [!IMPORTANT]
+ > The OPC Vault serial number is by default 20 bytes, and is obtained from the operating system cryptographic random number generator. The random number generator is FIPS 140 approved on Windows devices, but not on Linux. Consider this when choosing a service deployment that uses Linux VMs or Linux docker containers, on which the underlying technology OpenSSL isn't FIPS 140 approved.
+ - The issuerUniqueID and subjectUniqueID fields must not be present.
+ - End-entity certificates must be identified with the basic constraints extension, in accordance with IETF RFC 5280.
+ - The pathLenConstraint field must be set to 0 for the Issuing CA certificate.
+ - The Extended Key Usage extension must be present, and must contain the minimum set of Extended Key Usage object identifiers (OIDs). The anyExtendedKeyUsage OID (2.5.29.37.0) must not be specified.
+ - The CRL Distribution Point (CDP) extension must be present in the Issuer CA certificate.<br>
+ > [!IMPORTANT]
+ > The CDP extension is present in OPC Vault CA certificates. Nevertheless, OPC UA devices use custom methods to distribute CRLs.
+ - The Authority Information Access extension must be present in the subscriber certificates.<br>
+ > [!IMPORTANT]
+ > The Authority Information Access extension is present in OPC Vault subscriber certificates. Nevertheless, OPC UA devices use custom methods to distribute Issuer CA information.
+- Approved asymmetric algorithms, key lengths, hash functions and padding modes must be used.
+ - RSA and SHA-2 are the only supported algorithms.
+ - RSA can be used for encryption, key exchange, and signature.
+ - RSA encryption must use only the OAEP, RSA-KEM, or RSA-PSS padding modes.
+ - Key lengths greater than or equal to 2048 bits are required.
+ - Use the SHA-2 family of hash algorithms (SHA256, SHA384, and SHA512).
+ - RSA Root CA keys with a typical lifetime greater than or equal to 20 years must be 4096 bits or greater.
+ - RSA Issuer CA keys must be at least 2048 bits. If the CA certificate expiration date is after 2030, the CA key must be 4096 bits or greater.
+- Certificate lifetime
+ - Root CA certificates: The maximum certificate validity period for root CAs must not exceed 25 years.
+ - Sub CA or online Issuer CA certificates: The maximum certificate validity period for CAs that are online and issue only subscriber certificates must not exceed 6 years. For these CAs, the related private signature key must not be used longer than 3 years to issue new certificates.<br>
+ > [!IMPORTANT]
+ > The Issuer certificate, as it is generated in the default OPC Vault microservice without external Root CA, is treated like an online Sub CA, with respective requirements and lifetimes. The default lifetime is set to 5 years, with a key length greater than or equal to 2048.
+ - All asymmetric keys must have a maximum 5-year lifetime, and a recommended 1-year lifetime.<br>
+ > [!IMPORTANT]
+ > By default, the lifetimes of application certificates issued with OPC Vault have a lifetime of 2 years, and should be replaced every year.
+ - Whenever a certificate is renewed, it's renewed with a new key.
+- OPC UA-specific extensions in application instance certificates
+ - The subjectAltName extension includes the application Uri and hostnames. These might also include FQDN, IPv4, and IPv6 addresses.
+ - The keyUsage includes digitalSignature, nonRepudiation, keyEncipherment, and dataEncipherment.
+ - The extendedKeyUsage includes serverAuth and clientAuth.
+ - The authorityKeyIdentifier is specified in signed certificates.
+
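+To spot-check an issued certificate against these requirements (key length, signature algorithm, and extensions), you can inspect it with OpenSSL. The file name is illustrative:
+
+```bash
+# Print the full text of a DER-encoded certificate, including its extensions
+openssl x509 -in application-certificate.der -inform der -noout -text
+```
+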
+### CA keys and certificates must meet minimum requirements
+
+- **Private keys**: RSA keys must be at least 2048 bits. If the CA certificate expiration date is after 2030, the CA key must be 4096 bits or greater.
+- **Lifetime**: The maximum certificate validity period for CAs that are online and issue only subscriber certificates must not exceed 6 years. For these CAs, the related private signature key must not be used longer than 3 years to issue new certificates.
+
+### CA keys are protected using Hardware Security Modules
+
+OPC Vault uses Azure Key Vault Premium, and keys are protected by FIPS 140-2 Level 2 Hardware Security Modules (HSMs).
+
+The cryptographic modules that Key Vault uses, whether HSM or software, are FIPS validated. Keys created or imported as HSM-protected are processed inside an HSM, validated to FIPS 140-2 Level 2. Keys created or imported as software-protected are processed inside cryptographic modules validated to FIPS 140-2 Level 1.
+
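+For reference, an HSM-protected key is created in Azure Key Vault Premium by requesting an HSM key type. The following sketch isn't part of the OPC Vault deployment script, and the vault and key names are placeholders:
+
+```azurecli
+az keyvault key create --vault-name resourceGroupName-xxxxx \
+  --name issuer-ca-key --kty RSA-HSM --size 4096
+```
+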
+## Operational practices
+
+### Document and maintain standard operational PKI practices for certificate enrollment
+
+Document and maintain standard operational procedures (SOPs) for how CAs issue certificates, including:
+- How the subscriber is identified and authenticated.
+- How the certificate request is processed and validated (if applicable, include also how certificate renewal and rekey requests are processed).
+- How issued certificates are distributed to the subscribers.
+
+The OPC Vault microservice SOP is described in [OPC Vault architecture](overview-opc-vault-architecture.md) and [Manage the OPC Vault certificate service](howto-opc-vault-manage.md). The practices follow "OPC Unified Architecture Specification Part 12: Discovery and Global Services."
+
+### Document and maintain standard operational PKI practices for certificate revocation
+
+The certificate revocation process is described in [OPC Vault architecture](overview-opc-vault-architecture.md) and [Manage the OPC Vault certificate service](howto-opc-vault-manage.md).
+
+### Document CA key generation ceremony
+
+The Issuer CA key generation in the OPC Vault microservice is simplified, due to the secure storage in Azure Key Vault. For more information, see [Manage the OPC Vault certificate service](howto-opc-vault-manage.md).
+
+However, when you're using an external Root certification authority, a CA key generation ceremony must adhere to the following requirements.
+
+The CA key generation ceremony must be performed against a documented script that includes at least the following items:
+- Definition of roles and participant responsibilities.
+- Approval for conduct of the CA key generation ceremony.
+- Cryptographic hardware and activation materials required for the ceremony.
+- Hardware preparation (including asset/configuration information update and sign-off).
+- Operating system installation.
+- Specific steps performed during the CA key generation ceremony, such as:
+ - CA application installation and configuration.
+ - CA key generation.
+ - CA key backup.
+ - CA certificate signing.
+ - Import of signed keys in the protected HSM of the service.
+ - CA system shutdown.
+ - Preparation of materials for storage.
+
+## Next steps
+
+Now that you have learned how to securely manage OPC Vault, you can:
+
+> [!div class="nextstepaction"]
+> [Secure OPC UA devices with OPC Vault](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Vault Secure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-secure.md
+
+ Title: Secure the communication of OPC UA devices with OPC Vault - Azure | Microsoft Docs
+description: How to register OPC UA applications, and how to issue signed application certificates for your OPC UA devices with OPC Vault.
+ Last updated: 8/16/2018
+# Use the OPC Vault certificate management service
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article explains how to register applications, and how to issue signed application certificates for your OPC UA devices.
+
+## Prerequisites
+
+### Deploy the certificate management service
+
+First, deploy the service to the Azure cloud. For details, see [Deploy the OPC Vault certificate management service](howto-opc-vault-deploy.md).
+
+### Create the Issuer CA certificate
+
+If you haven't done so yet, create the Issuer CA certificate. For details, see [Create and manage the Issuer certificate for OPC Vault](howto-opc-vault-manage.md).
+
+## Secure OPC UA applications
+
+### Step 1: Register your OPC UA application
+
+> [!IMPORTANT]
+> The Writer role is required to register an application.
+
+1. Open your certificate service at `https://myResourceGroup-app.azurewebsites.net`, and sign in.
+2. Go to **Register New**. For an application registration, a user needs to have at least the Writer role assigned.
+3. The entry form follows OPC UA naming conventions. For example, the following screenshot shows the settings for the [OPC UA Reference Server](https://github.com/OPCFoundation/UA-.NETStandard/tree/master/Applications/ReferenceServer) sample in the OPC UA .NET Standard stack:
+
+ ![Screenshot of UA Reference Server Registration](media/howto-opc-vault-secure/reference-server-registration.png "UA Reference Server Registration")
+
+4. Select **Register** to register the application in the certificate service application database. The workflow directly guides the user to the next step to request a signed certificate for the application.
+
+### Step 2: Secure your application with a CA signed application certificate
+
+Secure your OPC UA application by issuing a signed certificate based on a Certificate Signing
+Request (CSR). Alternatively, you can request a new key pair, which includes a new private key in PFX or PEM format. For information about which method is supported for your application, see the documentation of your OPC UA device. In general, the CSR method is recommended, because it doesn't require a private key to be transferred over a wire.
+
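+If you plan to use the CSR method and your OPC UA device doesn't generate the CSR for you, you can create a key and CSR with OpenSSL. This is only a sketch and assumes OpenSSL 1.1.1 or later; the subject, application URI, and hostname are illustrative and must match your application's configuration:
+
+```bash
+# Generate a new 2048-bit RSA key and a CSR that carries the OPC UA application URI
+# and the hostname as subject alternative names
+openssl req -new -newkey rsa:2048 -sha256 -nodes \
+  -keyout myapp.key -out myapp.csr \
+  -subj "/CN=MyOpcUaApp/O=Contoso" \
+  -addext "subjectAltName=URI:urn:contoso:factory:myapp,DNS:myhost.contoso.com"
+```
+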
+#### Request a new certificate with a new keypair
+
+1. Go to **Applications**.
+2. Select **New Request** for a listed application.
+
+ ![Screenshot of Request New Certificate](media/howto-opc-vault-secure/request-new-certificate.png "Request New Certificate")
+
+3. Select **Request new KeyPair and Certificate** to request a private key and a new signed certificate with the public key for your application.
+
+ ![Screenshot of Generate a New KeyPair and Certificate](media/howto-opc-vault-secure/generate-new-key-pair.png "Generate New Key Pair")
+
+4. Fill in the form with a subject and the domain names. For the private key, choose PEM or PFX with password. Select **Generate New KeyPair** to create the certificate request.
+
+ ![Screenshot that shows the View Certificate Request Details screen and the Generate New KeyPair button.](media/howto-opc-vault-secure/approve-reject.png "Approve Certificate")
+
+5. Approval requires a user with the Approver role, and with signing permissions in Azure Key Vault. In the typical workflow, the Approver and Requester roles should be assigned to different users. Select **Approve** or **Reject** to start or cancel the actual creation of the key pair and the signing operation. The new key pair is created and stored securely in Azure Key Vault, until downloaded by the certificate requester. The resulting certificate with public key is signed by the CA. These operations can take a few seconds to finish.
+
+ ![Screenshot of View Certificate Request Details, with approval message at bottom](media/howto-opc-vault-secure/view-key-pair.png "View Key Pair")
+
+6. The resulting private key (PFX or PEM) and certificate (DER) can be downloaded from here as a binary file in the selected format. A base64-encoded version is also available, for example, to copy and paste the certificate into a command line or text entry.
+7. After the private key is downloaded and stored securely, you can select **Delete Private Key**. The certificate with the public key remains available for future use.
+8. Because a CA-signed certificate is used, the CA cert and Certificate Revocation List (CRL) should be downloaded here as well.
+
+How to apply the new key pair depends on the OPC UA device. Typically, the CA cert and CRL are copied to a `trusted` folder, while the public and private keys of the application certificate are applied to an `own` folder in the certificate store. Some devices might already support server push for certificate updates. Refer to the documentation of your OPC UA device.
+
+#### Request a new certificate with a CSR
+
+1. Go to **Applications**.
+2. Select **New Request** for a listed application.
+
+ ![Screenshot of Request New Certificate](media/howto-opc-vault-secure/request-new-certificate.png "Request New Certificate")
+
+3. Select **Request new Certificate with Signing Request** to request a new signed certificate for your application.
+
+ ![Screenshot of Generate a new Certificate](media/howto-opc-vault-secure/generate-new-certificate.png "Generate New Certificate")
+
+4. Upload CSR by selecting a local file or by pasting a base64 encoded CSR in the form. Select **Generate New Certificate**.
+
+ ![Screenshot of View Certificate Request Details](media/howto-opc-vault-secure/approve-reject-csr.png "Approve CSR")
+
+5. Approval requires a user with the Approver role, and with signing permissions in Azure Key Vault. Select **Approve** or **Reject** to start or cancel the actual signing operation. The resulting certificate with public key is signed by the CA. This operation can take a few seconds to finish.
+
+ ![Screenshot that shows the View Certificate Request Details and includes an approval message at bottom.](media/howto-opc-vault-secure/view-cert-csr.png "View Certificate")
+
+6. The resulting certificate (DER) can be downloaded from here as binary file. A base64 encoded version is also available, for example, to copy and paste the certificate to a command line or text entry.
+7. After the certificate is downloaded and stored securely, you can select **Delete Certificate**.
+8. Because a CA-signed certificate is used, the CA cert and CRL should be downloaded here as well.
+
+How to apply the new certificate depends on the OPC UA device. Typically, the CA cert and CRL are copied to a `trusted` folder, while the application certificate is applied to an `own` folder in the certificate store. Some devices might already support server push for certificate updates. Refer to the documentation of your OPC UA device.
+
+### Step 3: Device secured
+
+The OPC UA device is now ready to communicate with other OPC UA devices secured by CA signed certificates, without further configuration.
+
+## Next steps
+
+Now that you have learned how to secure OPC UA devices, you can:
+
+> [!div class="nextstepaction"]
+> [Run a secure certificate management service](howto-opc-vault-secure-ca.md)
iot-accelerators Iot Accelerators Connected Factory Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-configure.md
+
+ Title: Configure the Connected Factory topology - Azure | Microsoft Docs
+description: This article describes how to configure the Connected Factory solution accelerator including its topology.
+ Last updated: 12/12/2017
+# Configure the Connected Factory solution accelerator
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+The Connected Factory solution accelerator shows a simulated dashboard for a fictional company, Contoso. This company has factories in numerous locations globally.
+
+This article uses Contoso as an example to describe how to configure the topology of a Connected Factory solution.
+
+## Simulated factories configuration
+
+Each Contoso factory has production lines that consist of three stations each. Each station is a real OPC UA server with a specific role:
+
+* Assembly station
+* Test station
+* Packaging station
+
+These OPC UA servers have OPC UA nodes, and [OPC Publisher](overview-opc-publisher.md) sends the values of these nodes to Connected Factory. The data includes:
+
+* Current operational status such as current power consumption.
+* Production information such as the number of products produced.
+
+You can use the dashboard to drill into the Contoso factory topology from a global view down to a station level view. The Connected Factory dashboard enables:
+
+* The visualization of OEE and KPI figures for each layer in the topology.
+* The visualization of current values of OPC UA nodes in the stations.
+* The aggregation of the OEE and KPI figures from the station level to the global level.
+* The visualization of alerts and actions to perform if values reach specific thresholds.
+
+## Connected Factory topology
+
+The topology of factories, production lines, and stations is hierarchical:
+
+* The global level has factory nodes as children.
+* The factories have production line nodes as children.
+* The production lines have station nodes as children.
+* The stations (OPC UA servers) have OPC UA nodes as children.
+
+Every node in the topology has a common set of properties that define:
+
+* A unique identifier for the topology node.
+* A name.
+* A description.
+* An image.
+* The children of the topology node.
+* Minimum, target, and maximum values for OEE and KPI figures and the alert actions to execute.
+
+## Topology configuration file
+
+To configure the properties listed in the previous section, the Connected Factory solution uses a configuration file called [ContosoTopologyDescription.json](https://github.com/Azure/azure-iot-connected-factory/blob/master/WebApp/Contoso/Topology/ContosoTopologyDescription.json).
+
+You can find this file in the solution source code in the `WebApp/Contoso/Topology` folder.
+
+The following snippet shows an outline of the `ContosoTopologyDescription.json` configuration file:
+
+```json
+{
+ <global_configuration>,
+ "Factories": [
+ <factory_configuration>,
+ "ProductionLines": [
+ <production_line_configuration>,
+ "Stations": [
+ <station_configuration>,
+ <more station_configurations>
+ ],
+ <more production_line_configurations>
+ ]
+ <more factory_configurations>
+ ]
+}
+```
+
+The common properties of `<global_configuration>`, `<factory_configuration>`, `<production_line_configuration>`, and `<station_configuration>` are:
+
+* **Name** (type string)
+
+  Defines a descriptive name (ideally a single word) for the topology node to show in the dashboard.
+
+* **Description** (type string)
+
+ Describes the topology node in more detail.
+
+* **Image** (type string)
+
+ The path to an image in the WebApp solution to show when information about the topology node is shown in the dashboard.
+
+* **OeeOverall**, **OeePerformance**, **OeeAvailability**, **OeeQuality**, **Kpi1**, **Kpi2** (type `<performance_definition>`)
+
+  These properties define the minimum, target, and maximum values of the operational figure and are used to generate alerts. These properties also define the actions to execute if an alert is detected.
+
+The `<factory_configuration>` and `<production_line_configuration>` items have a property:
+
+* **Guid** (type string)
+
+ Uniquely identifies the topology node.
+
+`<factory_configuration>` has a property:
+
+* **Location** (type `<location_definition>`)
+
+ Specifies where the factory is located.
+
+`<station_configuration>` has properties:
+
+* **OpcUri** (type string)
+
+ This property must be set to the OPC UA Application URI of the OPC UA server.
+  Because the OPC UA specification requires it to be globally unique, this property is used to identify the station topology node.
+
+* **OpcNodes**, an array of OPC UA nodes (type `<opc_node_description>`)
+
+`<location_definition>` has properties (see the example after this list):
+
+* **City** (type string)
+
+ Name of city closest to the location
+
+* **Country** (type string)
+
+ Country of the location
+
+* **Latitude** (type double)
+
+ Latitude of the location
+
+* **Longitude** (type double)
+
+ Longitude of the location
+
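+The following snippet sketches a possible `Location` value for a factory in Munich. The coordinate values are illustrative only:
+
+```json
+"Location": {
+    "City": "Munich",
+    "Country": "Germany",
+    "Latitude": 48.13641,
+    "Longitude": 11.57754
+}
+```
+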
+`<performance_definition>` has properties:
+
+* **Minimum** (type double)
+
+ Lower threshold the value can reach. If the current value is below this threshold, an alert is generated.
+
+* **Target** (type double)
+
+ Ideal target value.
+
+* **Maximum** (type double)
+
+ Upper threshold the value can reach. If the current value is above this threshold, an alert is generated.
+
+* **MinimumAlertActions** (type `<alert_action>`)
+
+  Defines the set of actions that can be taken in response to a minimum alert.
+
+* **MaximumAlertActions** (type `<alert_action>`)
+
+  Defines the set of actions that can be taken in response to a maximum alert.
+
+`<alert_action>` has properties (a combined example follows this list):
+
+* **Type** (type string)
+
+ Type of the alert action. The following types are known:
+
+ * **AcknowledgeAlert**: the status of the alert should change to acknowledged.
+ * **CloseAlert**: all older alerts of the same type should no longer be shown in the dashboard.
+ * **CallOpcMethod**: an OPC UA method should be called.
+ * **OpenWebPage**: a browser window should be opened showing additional contextual information.
+
+* **Description** (type string)
+
+ Description of the action shown in the dashboard.
+
+* **Parameter** (type string)
+
+ Parameters required to execute the action. The value depends on the action type.
+
+ * **AcknowledgeAlert**: no parameter required.
+ * **CloseAlert**: no parameter required.
+ * **CallOpcMethod**: the node information and parameters of the OPC UA method to call in the format "NodeId of parent node, NodeId of method to call, URI of the OPC UA server."
+ * **OpenWebPage**: the URL to show in the browser window.
+
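+As an illustration, the following snippet sketches a `<performance_definition>` for **Kpi1** together with alert actions. The threshold values, node IDs, and URL are hypothetical and only show the expected shape; the alert actions are shown as an array to match the "set of actions" wording, and the **CallOpcMethod** parameter follows the "parent NodeId, method NodeId, server URI" format described above:
+
+```json
+"Kpi1": {
+    "Minimum": 50,
+    "Target": 100,
+    "Maximum": 200,
+    "MinimumAlertActions": [
+        {
+            "Type": "OpenWebPage",
+            "Description": "Open the station troubleshooting guide",
+            "Parameter": "https://example.com/troubleshooting"
+        }
+    ],
+    "MaximumAlertActions": [
+        {
+            "Type": "CallOpcMethod",
+            "Description": "Reduce the production speed",
+            "Parameter": "ns=2;i=100, ns=2;i=101, opc.tcp://assemblystation:51210/UA/Assembly"
+        }
+    ]
+}
+```
+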
+`<opc_node_description>` contains information about OPC UA nodes in a station (OPC UA server). Nodes that don't correspond to existing OPC UA nodes, but are used as storage in the computation logic of Connected Factory, are also valid. It has the following properties (a complete station example follows this list):
+
+* **NodeId** (type string)
+
+  Address of the OPC UA node in the station's (OPC UA server's) address space. Syntax must be as specified in the OPC UA specification for a NodeId.
+
+* **SymbolicName** (type string)
+
+ Name to be shown in the dashboard when the value of this OPC UA node is shown.
+
+* **Relevance** (array of type string)
+
+ Indicates for which computation of OEE or KPI the OPC UA node value is relevant. Each array element can be one of the following values:
+
+ * **OeeAvailability_Running**: the value is relevant for calculation of OEE Availability.
+ * **OeeAvailability_Fault**: the value is relevant for calculation of OEE Availability.
+ * **OeePerformance_Ideal**: the value is relevant for calculation of OEE Performance and is typically a constant value.
+ * **OeePerformance_Actual**: the value is relevant for calculation of OEE Performance.
+ * **OeeQuality_Good**: the value is relevant for calculation of OEE Quality.
+ * **OeeQuality_Bad**: the value is relevant for calculation of OEE Quality.
+ * **Kpi1**: the value is relevant for calculation of KPI1.
+ * **Kpi2**: the value is relevant for calculation of KPI2.
+
+* **OpCode** (type string)
+
+  Indicates how the value of the OPC UA node is handled in Time Series Insights queries and OEE/KPI calculations. Each Time Series Insights query targets a specific timespan, which is a parameter of the query, and delivers a result. The OpCode controls how the result is computed and can be one of the following values:
+
+ * **Diff**: difference between the last and the first value in the timespan.
+ * **Avg**: the average of all values in the timespan.
+ * **Sum**: the sum of all values in the timespan.
+ * **Last**: currently not used.
+ * **Count**: the number of values in the timespan.
+ * **Max**: the maximal value in the timespan.
+ * **Min**: the minimal value in the timespan.
+ * **Const**: the result is the value specified by property ConstValue.
+ * **SubMaxMin**: the difference between the maximal and the minimal value.
+ * **Timespan**: the timespan.
+
+* **Units** (type string)
+
+ Defines a unit of the value for display in the dashboard.
+
+* **Visible** (type boolean)
+
+ Controls if the value should be shown in the dashboard.
+
+* **ConstValue** (type double)
+
+ If the **OpCode** is **Const**, then this property is the value of the node.
+
+* **Minimum** (type double)
+
+ If the current value falls below this value, then a minimum alert is generated.
+
+* **Maximum** (type double)
+
+  If the current value rises above this value, then a maximum alert is generated.
+
+* **MinimumAlertActions** (type `<alert_action>`)
+
+  Defines the set of actions that can be taken in response to a minimum alert.
+
+* **MaximumAlertActions** (type `<alert_action>`)
+
+  Defines the set of actions that can be taken in response to a maximum alert.
+
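+Putting these pieces together, the following snippet sketches a possible `<station_configuration>` with a single OPC UA node. All names, identifiers, and values are hypothetical and only illustrate how the properties described above fit together:
+
+```json
+{
+    "Name": "Assembly",
+    "Description": "Assembly station of production line 0",
+    "Image": "img/assembly.jpg",
+    "OpcUri": "urn:assemblystation:UA:Munich:ProductionLine0",
+    "OpcNodes": [
+        {
+            "NodeId": "ns=2;i=385",
+            "SymbolicName": "NumberOfManufacturedProducts",
+            "Relevance": [ "Kpi1", "OeeQuality_Good" ],
+            "OpCode": "SubMaxMin",
+            "Units": "parts",
+            "Visible": true
+        }
+    ]
+}
+```
+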
+At the station level, you also see **Simulation** objects. These objects are only used to configure the Connected Factory simulation and should not be used to configure a real topology.
+
+## How the configuration data is used at runtime
+
+All the properties used in the configuration file can be grouped into different categories depending on how they are used. Those categories are:
+
+### Visual appearance
+
+Properties in this category define the visual appearance of the Connected Factory dashboard. Examples include:
+
+* Name
+* Description
+* Image
+* Location
+* Units
+* Visible
+
+### Internal topology tree addressing
+
+The WebApp maintains an internal data dictionary containing information of all topology nodes. The properties **Guid** and **OpcUri** are used as keys to access this dictionary and need to be unique.
+
+### OEE/KPI computation
+
+The OEE/KPI figures for the Connected Factory simulation are parameterized by:
+
+* The OPC UA node values to be included in the calculation.
+* How the figure is computed from the telemetry values.
+
+Connected Factory uses the OEE formulas published by the [OEE Foundation](http://www.oeefoundation.org).
+
+OPC UA node objects in stations enable tagging for usage in OEE/KPI calculation. The **Relevance** property indicates for which OEE/KPI figure the OPC UA node value should be used. The **OpCode** property defines how the value is included in the computation.
+
+### Alert handling
+
+Connected Factory supports a simple minimum/maximum threshold-based alert generation mechanism. There are a number of predefined actions you can configure in response to those alerts. The following properties control this mechanism:
+
+* Maximum
+* Minimum
+* MaximumAlertActions
+* MinimumAlertActions
+
+## Correlating to telemetry data
+
+For certain operations, such as visualizing the last value or creating Time Series Insight queries, the WebApp needs an addressing scheme for the ingested telemetry data. The telemetry sent to Connected Factory also needs to be stored in internal data structures. The two properties enabling these operations are at station (OPC UA server) and OPC UA node level:
+
+* **OpcUri**
+
+  Globally and uniquely identifies the OPC UA server the telemetry comes from. In the ingested messages, this property is sent as **ApplicationUri**.
+
+* **NodeId**
+
+ Identifies the node value in the OPC UA server. The format of the property must be as specified in the OPC UA specification. In the ingested messages, this property is sent as **NodeId**.
+
+See [What is OPC Publisher](overview-opc-publisher.md) for more information on how the telemetry data is ingested to Connected Factory.
+
+## Example: How KPI1 is calculated
+
+The configuration in the `ContosoTopologyDescription.json` file controls how OEE/KPI figures are calculated. The following example shows how properties in this file control the computation of KPI1.
+
+In Connected Factory KPI1 is used to measure the number of successfully manufactured products in the last hour. Each station (OPC UA server) in the Connected Factory simulation provides an OPC UA node (`NodeId: "ns=2;i=385"`), which provides the telemetry to compute this KPI.
+
+The configuration for this OPC UA node looks like the following snippet:
+
+```json
+{
+ "NodeId": "ns=2;i=385",
+ "SymbolicName": "NumberOfManufacturedProducts",
+ "Relevance": [ "Kpi1", "OeeQuality_Good" ],
+ "OpCode": "SubMaxMin"
+},
+```
+
+This configuration enables querying of the telemetry values of this node using Time Series Insights. For each unique **OpcUri** (**ApplicationUri**) and **NodeId** pair in a given timespan, the Time Series Insights query retrieves:
+
+* The number of values.
+* The minimal value.
+* The maximal value.
+* The average of all values.
+* The sum of all values.
+
+One characteristic of the **NumberOfManufacturedProducts** node value is that it only increases. To calculate the number of products manufactured in the timespan, Connected Factory uses the **OpCode** **SubMaxMin**. The calculation retrieves the minimum value at the start of the timespan and the maximum value at the end of the timespan.
+
+The **OpCode** in the configuration sets the computation logic to calculate the result as the difference between the maximum and minimum values. Those results are then accumulated bottom up to the root (global) level and shown in the dashboard.
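+
+As a worked example with made-up numbers: if the Time Series Insights query for the last hour returns a minimum of 1200 and a maximum of 1275 for **NumberOfManufacturedProducts**, the **SubMaxMin** result for that station is 75 manufactured products. The following snippet isn't an actual API payload; it only illustrates the arithmetic:
+
+```json
+{
+    "OpcUri": "urn:assemblystation:UA:Munich:ProductionLine0",
+    "NodeId": "ns=2;i=385",
+    "Timespan": "PT1H",
+    "Min": 1200,
+    "Max": 1275,
+    "SubMaxMin": 75
+}
+```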
+
+## Next steps
+
+A suggested next step is to learn how to [Customize the Connected Factory solution](iot-accelerators-connected-factory-customize.md).
iot-accelerators Iot Accelerators Connected Factory Customize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-customize.md
+
+ Title: Customize the Connected Factory solution - Azure | Microsoft Docs
+description: A description of how to customize the behavior of the Connected Factory solution accelerator.
++++
+ms.devlang: csharp
+ Last updated : 12/14/2017+++
+# Customize how the Connected Factory solution displays data from your OPC UA servers
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+The Connected Factory solution aggregates and displays data from the OPC UA servers connected to the solution. You can browse and send commands to the OPC UA servers in your solution. For more information about OPC UA, see the [Connected Factory FAQ](iot-accelerators-faq-cf.md).
+
+Examples of aggregated data in the solution include the Overall Equipment Efficiency (OEE) and Key Performance Indicators (KPIs) that you can view in the dashboard at the factory, line, and station levels. The following screenshot shows the OEE and KPI values for the **Assembly** station, on **Production line 1**, in the **Munich** factory:
+
+![Example of OEE and KPI values in the solution][img-oee-kpi]
+
+The solution enables you to view detailed information from specific data items from the OPC UA servers, called *stations*. The following screenshot shows plots of the number of manufactured items from a specific station:
+
+![Plots of number of manufactured items][img-manufactured-items]
+
+If you click one of the graphs, you can explore the data further using Time Series Insights (TSI):
+
+![Explore data using Time Series Insights][img-tsi]
+
+This article describes:
+
+- How the data is made available to the various views in the solution.
+- How you can customize the way the solution displays the data.
+
+## Data sources
+
+The Connected Factory solution displays data from the OPC UA servers connected to the solution. The default installation includes several OPC UA servers running a factory simulation. You can add your own OPC UA servers that [connect through a gateway][lnk-connect-cf] to your solution.
+
+You can browse the data items that a connected OPC UA server can send to your solution in the dashboard:
+
+1. Choose **Browser** to navigate to the **Select an OPC UA server** view:
+
+ ![Navigate to the Select an OPC UA server view][img-select-server]
+
+1. Select a server and click **Connect**. Click **Proceed** when the security warning appears.
+
+ > [!NOTE]
+ > This warning only appears once for each server and establishes a trust relationship between the solution dashboard and the server.
+
+1. You can now browse the data items that the server can send to the solution. Items that are being sent to the solution have a check mark:
+
+ ![Published items][img-published]
+
+1. If you are an *Administrator* in the solution, you can choose to publish a data item to make it available in the Connected Factory solution. As an Administrator, you can also change the value of data items and call methods in the OPC UA server.
+
+## Map the data
+
+The Connected Factory solution maps and aggregates the published data items from the OPC UA server to the various views in the solution. The Connected Factory solution deploys to your Azure account when you provision the solution. A JSON file in the Visual Studio Connected Factory solution stores this mapping information. You can view and modify this JSON configuration file in the Connected Factory Visual Studio solution. You can redeploy the solution after you make a change.
+
+You can use the configuration file to:
+
+- Edit the existing simulated factories, production lines, and stations.
+- Map data from real OPC UA servers that you connect to the solution.
+
+For more information about mapping and aggregating the data to meet your specific requirements, see [How to configure the Connected Factory solution accelerator](iot-accelerators-connected-factory-configure.md).
+
+## Deploy the changes
+
+When you have finished making changes to the **ContosoTopologyDescription.json** file, you must redeploy the Connected Factory solution to your Azure account.
+
+The **azure-iot-connected-factory** repository includes a **build.ps1** PowerShell script you can use to rebuild and deploy the solution.
+
+## Next steps
+
+Learn more about the Connected Factory solution accelerator by reading the following articles:
+
+* [Permissions on the azureiotsolutions.com site][lnk-permissions]
+* [Connected Factory FAQ](iot-accelerators-faq-cf.md)
+* [FAQ][lnk-faq]
++
+[img-oee-kpi]: ./media/iot-accelerators-connected-factory-customize/oeenadkpi.png
+[img-manufactured-items]: ./media/iot-accelerators-connected-factory-customize/manufactured.png
+[img-tsi]: ./media/iot-accelerators-connected-factory-customize/tsi.png
+[img-select-server]: ./media/iot-accelerators-connected-factory-customize/selectserver.png
+[img-published]: ./media/iot-accelerators-connected-factory-customize/published.png
++
+[lnk-permissions]: iot-accelerators-permissions.md
+[lnk-faq]: iot-accelerators-faq.md
iot-accelerators Iot Accelerators Connected Factory Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-dashboard.md
+
+ Title: Use the Connected Factory dashboard - Azure | Microsoft Docs
+description: This article describes how to use features of the Connected Factory dashboard to monitor and manage your industrial IoT devices.
+++++ Last updated : 07/10/2018++
+# Use features in the Connected Factory solution accelerator dashboard
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+The [Deploy a cloud-based solution to manage my industrial IoT devices](quickstart-connected-factory-deploy.md) quickstart showed you how to navigate the dashboard and respond to alarms. This how-to guide shows you some additional dashboard features you can use to monitor and manage your industrial IoT devices.
+
+## Apply filters
+
+You can filter the information displayed on the dashboard either in the **Factory Locations** panel or the **Alarms** panel:
+
+1. Click the **funnel** icon to display a list of available filters in either the factory locations panel or the alarms panel.
+
+1. The filters panel is displayed:
+
+ [![Connected Factory solution accelerator filters](./media/iot-accelerators-connected-factory-dashboard/filterpanel-inline.png)](./media/iot-accelerators-connected-factory-dashboard/filterpanel-expanded.png#lightbox)
+
+1. Choose the filter that you require and click **Apply**. It's also possible to type free text into the filter fields.
+
+1. The filter is then applied. The extra funnel icon indicates that a filter is applied:
+
+ [![Connected Factory solution accelerator filter applied](./media/iot-accelerators-connected-factory-dashboard/filterapplied-inline.png)](./media/iot-accelerators-connected-factory-dashboard/filterapplied-expanded.png#lightbox)
+
+ > [!NOTE]
+ > An active filter doesn't affect the displayed OEE and KPI values, it only filters the list contents.
+
+1. To clear a filter, click the funnel and click **Clear** in the filter panel.
+
+## Browse an OPC UA server
+
+When you deploy the solution accelerator, you automatically provision a set of simulated OPC UA servers that you can browse from the dashboard. Simulated servers make it easy for you to experiment with the solution accelerator without the need to deploy real servers.
+
+1. Click the **browser icon** in the dashboard navigation bar:
+
+ [![Connected Factory solution accelerator server browser](./media/iot-accelerators-connected-factory-dashboard/browser-inline.png)](./media/iot-accelerators-connected-factory-dashboard/browser-expanded.png#lightbox)
+
+1. Choose one of the servers from the list that shows the servers deployed for you in the solution accelerator:
+
+ [![Connected Factory solution accelerator server list](./media/iot-accelerators-connected-factory-dashboard/serverlist-inline.png)](./media/iot-accelerators-connected-factory-dashboard/serverlist-expanded.png#lightbox)
+
+1. Click **Connect**. A security dialog is displayed. For the simulation, it's safe to click **Proceed**.
+
+1. To expand any of the nodes in the server tree, click it. Nodes that are publishing telemetry have a check mark beside them:
+
+ [![Connected Factory solution accelerator server tree](./media/iot-accelerators-connected-factory-dashboard/servertree-inline.png)](./media/iot-accelerators-connected-factory-dashboard/servertree-expanded.png#lightbox)
+
+1. Right-click an item to read, write, publish, or call that node. The actions available to you depend on your permissions and the attributes of the node. The read option displays a context panel showing the value of the specific node. The write option displays a context panel where you can enter a new value. The call option displays a context panel where you can enter the parameters for the call.
+
+## Publish a node
+
+When you browse a *simulated OPC UA server*, you can also choose to publish new nodes. You can analyze the telemetry from these nodes in the solution. These *simulated OPC UA servers* make it easy to experiment with the solution accelerator without deploying real devices:
+
+1. Browse to a node in the OPC UA server browser tree that you wish to publish.
+
+1. Right-click the node. Click **Publish**:
+
+ [![Connected Factory solution accelerator publish node](./media/iot-accelerators-connected-factory-dashboard/publishnode-inline.png)](./media/iot-accelerators-connected-factory-dashboard/publishnode-expanded.png#lightbox)
+
+1. A context panel appears which tells you that the publish has succeeded. The node appears in the station level view with a check mark beside it:
+
+ [![Connected Factory solution accelerator publish success](./media/iot-accelerators-connected-factory-dashboard/publishsuccess-inline.png)](./media/iot-accelerators-connected-factory-dashboard/publishsuccess-expanded.png#lightbox)
+
+## Command and control
+
+The Connected Factory solution allows you to command and control your industrial devices directly from the cloud. You can use this feature to respond to alarms generated by the device. For example, you could send a command to the device to open a pressure release valve. You can find the available commands in the **StationCommands** node in the OPC UA servers browser tree. In this scenario, you open a pressure release valve on the assembly station of a production line in Munich. To use the command and control functionality, you must be in the **Administrator** role for the solution accelerator deployment:
+
+1. Browse to the **StationCommands** node in the OPC UA server browser tree for the Munich, production line 0, assembly station.
+
+1. Choose the command that you wish use. Right-click the **OpenPressureReleaseValve** node. Click **Call**:
+
+ [![Connected Factory solution accelerator call command](./media/iot-accelerators-connected-factory-dashboard/callcommand-inline.png)](./media/iot-accelerators-connected-factory-dashboard/callcommand-expanded.png#lightbox)
+
+1. A context panel appears informing you which method you're about to call and any parameter details. Click **Call**:
+
+ [![Connected Factory solution accelerator call parameters](./media/iot-accelerators-connected-factory-dashboard/callpanel-inline.png)](./media/iot-accelerators-connected-factory-dashboard/callpanel-expanded.png#lightbox)
+
+1. The context panel is updated to inform you that the method call succeeded. You can verify the call succeeded by reading the value of the pressure node that updated as a result of the call.
+
+ [![Connected Factory solution accelerator call success](./media/iot-accelerators-connected-factory-dashboard/callsuccess-inline.png)](./media/iot-accelerators-connected-factory-dashboard/callsuccess-expanded.png#lightbox)
+
+## Behind the scenes
+
+When you deploy a solution accelerator, the deployment process creates multiple resources in the Azure subscription you selected. You can view these resources in the Azure [portal](https://portal.azure.com). The deployment process creates a **resource group** with a name based on the name you choose for your solution accelerator:
+
+[![Connected Factory solution accelerator resource group](./media/iot-accelerators-connected-factory-dashboard/resourcegroup-inline.png)](./media/iot-accelerators-connected-factory-dashboard/resourcegroup-expanded.png#lightbox)
+
+You can view the settings of each resource by selecting it in the list of resources in the resource group.
+
+You can also view the source code for the solution accelerator in the [azure-iot-connected-factory](https://github.com/Azure/azure-iot-connected-factory) GitHub repository.
+
+When you're done, you can delete the solution accelerator from your Azure subscription on the [azureiotsolutions.com](https://www.azureiotsolutions.com/Accelerators#dashboard) site. This site enables you to easily delete all the resources that were provisioned when you created the solution accelerator.
+
+> [!NOTE]
+> To ensure that you delete everything related to the solution accelerator, delete it on the [azureiotsolutions.com](https://www.azureiotsolutions.com/Accelerators#dashboard) site. Do not delete the resource group in the portal.
+
+## Next steps
+
+Now that you've deployed a working solution accelerator, you can continue getting started with IoT solution accelerators by reading the following articles:
+
+* [Configure the Connected Factory solution accelerator](iot-accelerators-connected-factory-configure.md)
+* [Permissions on the azureiotsolutions.com site](iot-accelerators-permissions.md)
iot-accelerators Iot Accelerators Connected Factory Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-features.md
+
+ Title: Connected Factory solution features - Azure | Microsoft Docs
+description: This article describes an overview of the features of the Connected Factory preconfigured solution, such as cloud dashboard, rules, and alerts.
+++++ Last updated : 06/10/2019+++
+# What is Connected Factory IoT solution accelerator?
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+Connected Factory is an implementation of Microsoft's Azure Industrial IoT reference architecture, packaged as an open-source solution. You can use it as a starting point for a commercial product. You can deploy a pre-built version of the Connected Factory solution into your Azure subscription from [Azure IoT solution accelerators](https://www.azureiotsolutions.com/#solutions/types/CF).
+
+![Connected Factory solution dashboard](./media/iot-accelerators-connected-factory-features/dashboard.png)
+
+The Connected Factory solution accelerator [code is available on GitHub](https://github.com/Azure/azure-iot-connected-factory).
+
+Connected Factory includes the following features:
+
+## Industrial device interoperability
+
+- Connect to industrial assets with an OPC UA interface.
+- Use the simulated production lines (running OPC UA servers in Docker containers) to see live telemetry from them.
+- Browse the OPC UA information model of the OPC UA servers from a cloud dashboard.
+
+## Remote management
+
+- Configure your OPC UA assets from the cloud dashboard (call methods, read, and write data).
+- Publish and unpublish telemetry data from your OPC UA assets from a cloud dashboard.
+
+## Cloud dashboard
+
+- View telemetry previews directly in a cloud dashboard.
+- View trends in telemetry data and create correlations using the Time Series Insights Explorer dashboard.
+- See calculated Overall Equipment Efficiency (OEE) and Key Performance Indicators (KPIs) from a cloud dashboard.
+- View industrial asset hierarchies in a tree topology as well as on an interactive map.
+- View, acknowledge, and close alerts from a cloud dashboard.
+
+## Azure Time Series Insights
+
+- [Azure Time Series Insights](../time-series-insights/time-series-insights-overview.md) is built for storing, visualizing, and querying large amounts of time-series data.
+- Connected Factory integrates with this service, enabling you to perform deep, real-time analysis of your device data.
+
+## Rules and alerts
+
+[Configure threshold-based rules for alerts](iot-accelerators-connected-factory-configure.md).
+
+## End-to-end security
+
+- Configure security permissions for users using role-based access control (RBAC).
+- End-to-end encryption is implemented using OPC UA authentication (using X.509 certificates) as well as security tokens.
+
+## Customizability
+
+- Customize the solution to meet specific business requirements.
+- Full solution source-code available on GitHub. See the [Connected Factory preconfigured solution](https://github.com/Azure/azure-iot-connected-factory) repository.
+
+## Next steps
+
+To learn more about the Connected Factory solution accelerator, see the Quickstart [Try a cloud-based solution to manage my industrial IoT devices](quickstart-connected-factory-deploy.md).
iot-accelerators Iot Accelerators Faq Cf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-faq-cf.md
+
+ Title: Connected Factory solution FAQ - Azure | Microsoft Docs
+description: This article answers the frequently asked questions for the Connected Factory solution accelerator. It includes links to the GitHub repository.
+++++ Last updated : 12/12/2017+++
+# Frequently asked questions for Connected Factory solution accelerator
+
+See also, the general [FAQ](iot-accelerators-faq.md) for IoT solution accelerators.
+
+### Where can I find the source code for the solution accelerator?
+
+The source code is stored in the following GitHub repository:
+
+* [Connected Factory solution accelerator](https://github.com/Azure/azure-iot-connected-factory)
+
+### What is OPC UA?
+
+OPC Unified Architecture (UA), released in 2008, is a platform-independent, service-oriented interoperability standard. OPC UA is used by various industrial systems and devices such as industry PCs, PLCs, and sensors. OPC UA integrates the functionality of the OPC Classic specifications into one extensible framework with built-in security. It is a standard that is driven by the OPC Foundation. The [OPC Foundation](https://opcfoundation.org/) is a not-for-profit organization with more than 440 members. The goal of the organization is to use OPC specifications to facilitate multi-vendor, multi-platform, secure and reliable interoperability through:
+
+* Infrastructure
+* Specifications
+* Technology
+* Processes
+
+### Why did Microsoft choose OPC UA for the Connected Factory solution accelerator?
+
+Microsoft chose OPC UA because it is an open, non-proprietary, platform independent, industry-recognized, and proven standard. It is a requirement for Industrie 4.0 (RAMI4.0) reference architecture solutions ensuring interoperability between a broad set of manufacturing processes and equipment. Microsoft sees demand from its customers to build Industrie 4.0 solutions. Support for OPC UA helps lower the barrier for customers to achieve their goals and provides immediate business value to them.
+
+### How do I add a public IP address to the simulation VM?
+
+You have two options to add the IP address:
+
+* Use the PowerShell script `Simulation/Factory/Add-SimulationPublicIp.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory). Pass in your deployment name as a parameter. For a local deployment, use `<your username>ConnFactoryLocal`. The script prints out the IP address of the VM.
+
+* In the Azure portal, locate the resource group of your deployment. Except for a local deployment, the resource group has the name you specified as solution or deployment name. For a local deployment using the build script, the name of the resource group is `<your username>ConnFactoryLocal`. Now add a new **Public IP address** resource to the resource group.
+
+> [!NOTE]
+> In either case, ensure you install the latest patches by following the instructions on the [Ubuntu website](https://wiki.ubuntu.com/Security/Upgrades). Keep the installation up to date for as long as your VM is accessible through a public IP address.
+
+### How do I remove the public IP address to the simulation VM?
+
+You have two options to remove the IP address:
+
+* Use the PowerShell script `Simulation/Factory/Remove-SimulationPublicIp.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory). Pass in your deployment name as a parameter. For a local deployment, use `<your username>ConnFactoryLocal`. The script prints out the IP address of the VM.
+
+* In the Azure portal, locate the resource group of your deployment. Except for a local deployment, the resource group has the name you specified as solution or deployment name. For a local deployment using the build script, the name of the resource group is `<your username>ConnFactoryLocal`. Now remove the **Public IP address** resource from the resource group.
+
+### How do I sign in to the simulation VM?
+
+Signing in to the simulation VM is only supported if you have deployed your solution using the PowerShell script `build.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory).
+
+If you deployed the solution from www.azureiotsolutions.com, you cannot sign in to the VM, because the password is generated randomly and you cannot reset it.
+
+1. Add a public IP address to the VM. See [How do I add a public IP address to the simulation VM?](#how-do-i-add-a-public-ip-address-to-the-simulation-vm)
+1. Create an SSH session to your VM using the IP address of the VM.
+1. The username to use is: `docker`.
+1. The password to use depends on the version you used to deploy:
+ * For solutions deployed using the build.ps1 script before 1 June 2017, the password is: `Passw0rd`.
+ * For solutions deployed using the build.ps1 script after 1 June 2017, you can find the password in the `<name of your deployment>.config.user` file. The password is stored in the **VmAdminPassword** setting. The password is generated randomly at deployment time unless you specify it using the `build.ps1` script parameter `-VmAdminPassword`
+
+### How do I stop and start all docker processes in the simulation VM?
+
+1. Sign in to the simulation VM. See [How do I sign in to the simulation VM?](#how-do-i-sign-in-to-the-simulation-vm)
+1. To check which containers are active, run: `docker ps`.
+1. To stop all simulation containers, run: `./stopsimulation`.
+1. To start all simulation containers:
+ * Export a shell variable with the name **IOTHUB_CONNECTIONSTRING**. Use the value of the **IotHubOwnerConnectionString** setting in the `<name of your deployment>.config.user` file. For example:
+
+ ```sh
+ export IOTHUB_CONNECTIONSTRING="HostName={yourdeployment}.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey={your key}"
+ ```
+
+ * Run `./startsimulation`.
+
+### How do I update the simulation in the VM?
+
+If you have made any changes to the simulation, you can use the PowerShell script `build.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory) with the `updatesimulation` command. This script builds all the simulation components, stops the simulation in the VM, and then uploads, installs, and starts the updated components.
+
+### How do I find out the connection string of the IoT hub used by my solution?
+
+If you deployed your solution with the `build.ps1` script in the [repository](https://github.com/Azure/azure-iot-connected-factory), the connection string is the value of **IotHubOwnerConnectionString** in the `<name of your deployment>.config.user` file.
+
+You can also find the connection string using the Azure portal. In the IoT Hub resource in the resource group of your deployment, locate the connection string settings.
+
+### Which IoT Hub devices does the Connected Factory simulation use?
+
+The simulation self-registers the following devices:
+
+* proxy.beijing.corp.contoso
+* proxy.capetown.corp.contoso
+* proxy.mumbai.corp.contoso
+* proxy.munich0.corp.contoso
+* proxy.rio.corp.contoso
+* proxy.seattle.corp.contoso
+* publisher.beijing.corp.contoso
+* publisher.capetown.corp.contoso
+* publisher.mumbai.corp.contoso
+* publisher.munich0.corp.contoso
+* publisher.rio.corp.contoso
+* publisher.seattle.corp.contoso
+
+Using the [DeviceExplorer](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/tools/) or [the IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) tool, you can check which devices are registered with the IoT hub your solution is using. To use device explorer, you need the connection string for the IoT hub in your deployment. To use the IoT extension for Azure CLI, you need your IoT Hub name.
+
+### How can I get log data from the simulation components?
+
+All components in the simulation log information into log files. These files can be found in the VM in the folder `home/docker/Logs`. To retrieve the logs, you can use the PowerShell script `Simulation/Factory/Get-SimulationLogs.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory).
+
+This script needs to sign in to the VM. You may need to provide credentials for the sign-in. See [How do I sign in to the simulation VM?](#how-do-i-sign-in-to-the-simulation-vm) to find the credentials.
+
+If the VM does not yet have a public IP address, the script adds one and removes it again when it finishes. The script puts all log files in an archive and downloads the archive to your development workstation.
+
+Alternatively, sign in to the VM via SSH and inspect the log files at runtime.
+
+### How can I check if the simulation is sending data to the cloud?
+
+With the [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer) or the [Azure IoT CLI Extension monitor-events](/cli/azure/ext/azure-iot/iot/hub#ext-azure-iot-az-iot-hub-monitor-events) command, you can inspect the data sent to IoT Hub from certain devices. To use these tools, you need to know the connection string for the IoT hub in your deployment. See [How do I find out the connection string of the IoT hub used by my solution?](#how-do-i-find-out-the-connection-string-of-the-iot-hub-used-by-my-solution)
+
+Inspect the data sent by one of the publisher devices:
+
+* publisher.beijing.corp.contoso
+* publisher.capetown.corp.contoso
+* publisher.mumbai.corp.contoso
+* publisher.munich0.corp.contoso
+* publisher.rio.corp.contoso
+* publisher.seattle.corp.contoso
+
+If you see no data sent to IoT Hub, then there is an issue with the simulation. As a first analysis step you should analyze the log files of the simulation components. See [How can I get log data from the simulation components?](#how-can-i-get-log-data-from-the-simulation-components) Next, try to stop and start the simulation and if there's still no data sent, update the simulation completely. See [How do I update the simulation in the VM?](#how-do-i-update-the-simulation-in-the-vm)
+
+### How do I enable an interactive map in my Connected Factory solution?
+
+To enable an interactive map in your Connected Factory solution, you must have an Azure Maps account.
+
+When deploying from [www.azureiotsolutions.com](https://www.azureiotsolutions.com), the deployment process adds an Azure Maps account to the resource group that contains the solution accelerator services.
+
+When you deploy using the `build.ps1` script in the Connected Factory GitHub repository, set the environment variable `$env:MapApiQueryKey` in the build window to the [key of your Azure Maps account](../azure-maps/how-to-manage-account-keys.md). The interactive map is then enabled automatically.
+
+You can also add an Azure Maps account key to your solution accelerator after deployment. Navigate to the Azure portal and access the App Service resource in your Connected Factory deployment. Navigate to **Application settings**, where you find a section **Application settings**. Set the **MapApiQueryKey** to the [key of your Azure Maps account](../azure-maps/how-to-manage-account-keys.md). Save the settings and then navigate to **Overview** and restart the App Service.
+
+### How do I create an Azure Maps account?
+
+See, [How to manage your Azure Maps account and keys](../azure-maps/how-to-manage-account-keys.md).
+
+### How to obtain your Azure Maps account key
+
+See, [How to manage your Azure Maps account and keys](../azure-maps/how-to-manage-account-keys.md).
+
+### How do I enable the interactive map while debugging locally?
+
+To enable the interactive map while you are debugging locally, set the value of the setting `MapApiQueryKey` in the files `local.user.config` and `<yourdeploymentname>.user.config` in the root of your deployment to the value of the **QueryKey** you copied previously.
+
+### How do I use a different image at the home page of my dashboard?
+
+To change the static image shown on the home page of the dashboard, replace the image `WebApp\Content\img\world.jpg`. Then rebuild and redeploy the WebApp.
+
+### How do I use non-OPC UA devices with Connected Factory?
+
+To send telemetry data from non-OPC UA devices to Connected Factory:
+
+1. [Configure a new station in the Connected Factory topology](iot-accelerators-connected-factory-configure.md) in the `ContosoTopologyDescription.json` file.
+
+1. Ingest the telemetry data in Connected Factory compatible JSON format (a filled-in example follows these steps):
+
+ ```json
+    [
+      {
+        "ApplicationUri": "<the_value_of_OpcUri_of_your_station>",
+        "DisplayName": "<name_of_the_datapoint>",
+        "NodeId": "<value_of_NodeId_of_your_datapoint_in_the_station>",
+        "Value": {
+          "Value": <datapoint_value>,
+          "SourceTimestamp": "<timestamp>"
+        }
+      }
+    ]
+ ```
+
+1. The format of `<timestamp>` is: `2017-12-08T19:24:51.886753Z`
+
+1. Restart the Connected Factory App Service.
+
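+For reference, the following snippet shows a filled-in example of this payload for a hypothetical temperature data point. All values are made up:
+
+```json
+[
+    {
+        "ApplicationUri": "urn:contoso:mystation",
+        "DisplayName": "Temperature",
+        "NodeId": "ns=2;i=1001",
+        "Value": {
+            "Value": 21.5,
+            "SourceTimestamp": "2017-12-08T19:24:51.886753Z"
+        }
+    }
+]
+```
+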
+### Next steps
+
+You can also explore some of the other features and capabilities of the IoT solution accelerators:
+
+* [Deploy Connected Factory solution accelerator](quickstart-connected-factory-deploy.md)
+* [IoT security from the ground up](../iot-fundamentals/iot-security-ground-up.md)
iot-accelerators Overview Iot Industrial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-iot-industrial.md
+
+ Title: Overview of Azure industrial IoT | Microsoft Docs
+description: This article provides an overview of industrial IoT. It explains the connected factory, factory floor connectivity and security components in IIoT.
++ Last updated : 11/26/2018++++++
+# What is industrial IoT (IIoT)
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+IIoT is the Industrial Internet of Things. IIoT enhances industrial efficiencies through the application of IoT in the manufacturing industry.
+
+## Improve industrial efficiencies
+
+Enhance your operational productivity and profitability with a connected factory solution accelerator. Connect and monitor your industrial equipment and devices in the cloud, including your machines already operating on the factory floor. Analyze your IoT data for insights that help you increase the performance of the entire factory floor.
+
+Reduce the time-consuming process of accessing factory floor machines with OPC Twin, and focus your time on building IIoT solutions. Streamline certificate management and industrial asset integration with OPC Vault, and feel confident that asset connectivity is secured. These microservices provide a REST-like API on top of [Azure Industrial IoT components](https://github.com/Azure/Industrial-IoT). The service API gives you control of edge module functionality.
+
+![Industrial IoT overview](media/overview-iot-industrial/overview.png)
+
+> [!NOTE]
+> For more information about Azure Industrial IoT services, see the GitHub [repository](https://github.com/Azure/Industrial-IoT) and [documentation](https://azure.github.io/Industrial-IoT/).
+> If you're unfamiliar with how Azure IoT Edge modules work, begin with the following articles:
+> - [About Azure IoT Edge](../iot-edge/about-iot-edge.md)
+> - [Azure IoT Edge modules](../iot-edge/iot-edge-modules.md)
+
+## Connected factory
+
+[Connected Factory](../iot-accelerators/iot-accelerators-connected-factory-features.md) is an implementation of Microsoft's Azure Industrial IoT reference architecture that can be customized to meet specific business requirements. The full solution code is open-source and available in the Connected Factory solution accelerator GitHub repository. You can use it as a starting point for a commercial product, and deploy a pre-built solution into your Azure subscription in minutes.
+
+## Factory floor connectivity
+
+OPC Twin is an IIoT component that automates device discovery and registration, and offers remote control of industrial devices through REST APIs. OPC Twin uses Azure IoT Edge and IoT Hub to connect the cloud and the factory network. OPC Twin allows IIoT developers to focus on building IIoT applications without worrying about how to securely access the on-premises machines.
+
+## Security
+
+OPC Vault is an implementation of OPC UA Global Discovery Server (GDS) that can configure, register, and manage certificate lifecycle for OPC UA server and client applications in the cloud. OPC Vault simplifies the implementation and maintenance of secure asset connectivity in the industrial space. By automating certificate management, OPC Vault frees factory operators from the manual and complex processes associated with connectivity and certificate management.
+
+## Next steps
+
+Now that you've had an introduction to industrial IoT and its components, here is the suggested next step:
+
+[What is OPC Twin](overview-opc-twin.md)
iot-accelerators Overview Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-publisher.md
+
+ Title: What is OPC Publisher - Azure | Microsoft Docs
+description: This article provides an overview of the features of OPC Publisher. It allows you to publish JSON-encoded telemetry data, using a JSON payload, to Azure IoT Hub.
++ Last updated : 06/10/2019+++++++
+# What is OPC Publisher?
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+OPC Publisher is a reference implementation that demonstrates how to:
+
+- Connect to existing OPC UA servers.
+- Publish JSON encoded telemetry data from OPC UA servers in OPC UA Pub/Sub format, using a JSON payload, to Azure IoT Hub.
+
+You can use any of the transport protocols that the Azure IoT Hub client SDK supports: HTTPS, AMQP, and MQTT.
+
+The reference implementation includes:
+
+- An OPC UA *client* for connecting to existing OPC UA servers you have on your network.
+- An OPC UA *server* on port 62222 that you can use to manage what's published. The application also offers IoT Hub direct methods to do the same.
+
+You can download the [OPC Publisher reference implementation](https://github.com/Azure/iot-edge-opc-publisher) from GitHub.
+
+The application is implemented using .NET Core technology and can run on any platform supported by .NET Core.
+
+OPC Publisher implements retry logic to establish connections to endpoints that don't respond to a certain number of keep alive requests. For example, if an OPC UA server stops responding because of a power outage.
+
+For each distinct publishing interval to an OPC UA server, the application creates a separate subscription over which all nodes with this publishing interval are updated.
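+
+For example, the nodes to publish and their intervals are typically configured in a published-nodes JSON file. The following sketch only illustrates the general shape; the endpoint URL and node IDs are placeholders, and the exact schema is defined in the OPC Publisher repository and can differ between versions. Because the two nodes below use different publishing intervals, OPC Publisher would create two subscriptions to the same server:
+
+```json
+[
+    {
+        "EndpointUrl": "opc.tcp://assemblystation:51210/UA/Assembly",
+        "UseSecurity": true,
+        "OpcNodes": [
+            { "Id": "ns=2;i=385", "OpcSamplingInterval": 1000, "OpcPublishingInterval": 2000 },
+            { "Id": "ns=2;i=387", "OpcSamplingInterval": 1000, "OpcPublishingInterval": 5000 }
+        ]
+    }
+]
+```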
+
+OPC Publisher supports batching of the data sent to IoT Hub to reduce network load. This batching sends a packet to IoT Hub only if the configured packet size is reached.
+
+This application uses the OPC Foundation OPC UA reference stack as NuGet packages. See [https://opcfoundation.org/license/redistributables/1.3/](https://opcfoundation.org/license/redistributables/1.3/) for the licensing terms.
+
+## Next steps
+
+Now you've learned what OPC Publisher is, the suggested next step is to learn how to:
+
+[Configure OPC Publisher](howto-opc-publisher-configure.md)
iot-accelerators Overview Opc Twin Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-twin-architecture.md
+
+ Title: OPC Twin architecture - Azure | Microsoft Docs
+description: This article provides an overview of the OPC Twin architecture. It describes about the discovery, activation, browsing, and monitoring of the server.
++ Last updated : 11/26/2018++++++
+# OPC Twin architecture
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+The following diagrams illustrate the OPC Twin architecture.
+
+## Discover and activate
+
+1. The operator enables network scanning on the module or makes a one-time discovery using a discovery URL. The discovered endpoints and application information are sent via telemetry to the onboarding agent for processing. The OPC UA device onboarding agent processes OPC UA server discovery events sent by the OPC Twin IoT Edge module when in discovery or scan mode. The discovery events result in application registration and updates in the OPC UA device registry.
+
+ ![Diagram that shows the OPC Twin architecture with the OPC Twin IoT Edge module in discovery or scan mode.](media/overview-opc-twin-architecture/opc-twin1.png)
+
+1. The operator inspects the certificate of the discovered endpoint and activates the registered endpoint twin for access.
+
+ ![Diagram that shows the OPC Twin architecture with the IoT Edge "Twin identity".](media/overview-opc-twin-architecture/opc-twin2.png)
+
+## Browse and monitor
+
+1. Once activated, the operator can use the Twin service REST API to browse or inspect the server information model, read/write object variables, and call methods. The operator uses a simplified OPC UA API expressed fully in HTTP and JSON.
+
+ ![Diagram that shows the OPC Twin architecture setup for browsing and inspecting the server information model.](media/overview-opc-twin-architecture/opc-twin3.png)
+
+1. The twin service REST interface can also be used to create monitored items and subscriptions in the OPC Publisher. The OPC Publisher allows telemetry to be sent from OPC UA server systems to IoT Hub. For more information about OPC Publisher, see [What is OPC Publisher](overview-opc-publisher.md).
+
+ ![How OPC Twin works](media/overview-opc-twin-architecture/opc-twin4.png)
iot-accelerators Overview Opc Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-twin.md
+
+ Title: What is OPC Twin - Azure | Microsoft Docs
+description: This article provides an overview of OPC Twin. OPC Twin provides discovery, registration, and remote control of industrial devices through REST APIs.
++ Last updated : 11/26/2018++++++
+# What is OPC Twin?
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+OPC Twin consists of microservices that use Azure IoT Edge and IoT Hub to connect the cloud and the factory network. OPC Twin provides discovery, registration, and remote control of industrial devices through REST APIs. OPC Twin does not require an OPC Unified Architecture (OPC UA) SDK, is programming language agnostic, and can be included in a serverless workflow. This article describes several OPC Twin use cases.
+
+## Discovery and control
+You can use OPC Twin for simple discovery and registration.
+
+### Simple discovery and registration
+OPC Twin allows factory operators to scan the factory network, so that OPC UA servers can be discovered and registered. As an alternative, factory operators can also manually register OPC UA devices using a known discovery URL. For example, to connect to all the OPC UA devices after the IoT Edge gateway with an OPC Twin module has been installed on the factory floor, the factory operator can remotely trigger a scan of the network and visually see all the OPC UA servers.
+
+### Simple control
+OPC Twin allows factory operators to react to events and reconfigure their factory floor machines from the cloud, either automatically or manually on the fly. OPC Twin provides REST APIs to invoke services on the OPC UA server, browse its address space, and read/write variables and execute methods. For example, a boiler uses a temperature KPI to control the production line. The temperature sensor publishes the change in data using OPC Publisher. The factory operator receives the alert that the temperature has reached the threshold. The production line cools down automatically through OPC Twin. The factory operator is notified of the cool down.
+
+## Authentication
+You can use OPC Twin for simple authentication and for a simple developer experience.
+
+### Simple authentication
+OPC Twin uses Azure Active Directory (AAD)-based authentication and auditing from end to end. For example, an application built on top of OPC Twin can determine what an operator has performed on a machine. On the machine side, it's through OPC UA auditing. On the cloud side, it's through storing an immutable client audit log and AAD authentication on the REST API.
+
+### Simple developer experience
+OPC Twin can be used with applications written in any programming language through REST APIs. As developers integrate an OPC UA client into a solution, knowledge of the OPC UA SDK is not necessary. OPC Twin can seamlessly integrate into stateless, serverless architectures. For example, a full stack web developer who develops an application for an alarm and event dashboard can write the logic to respond to events in JavaScript or TypeScript using OPC Twin, without knowledge of C, C#, or the full OPC UA stack implementation.
+
+## Next steps
+
+Now that you have learned about OPC Twin and its uses, here is the suggested next step:
+
+[What is OPC Vault](overview-opc-vault.md)
iot-accelerators Overview Opc Vault Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-vault-architecture.md
+
+ Title: OPC Vault architecture - Azure | Microsoft Docs
+description: OPC Vault certificate management service architecture
++ Last updated : 08/16/2019++++++
+# OPC Vault architecture
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+This article gives an overview about the OPC Vault microservice and the OPC Vault IoT Edge module.
+
+OPC UA applications use application instance certificates to provide application level security. A secure connection is established by using asymmetric cryptography, for which the application certificates provide the public and private key pair. The certificates can be self-signed, or signed by a Certificate Authority (CA).
+
+An OPC UA application has a list of trusted certificates that represents the applications it trusts. These certificates can be self-signed or signed by a CA, or can be a Root-CA or a Sub-CA themselves. If a trusted certificate is part of a larger certificate chain, the application trusts all certificates that chain up to the certificate in the trust list. This is true as long as the full certificate chain can be validated.
+
+The major difference between trusting self-signed certificates and trusting a CA certificate
+is the installation effort required to deploy and maintain trust. There's also additional effort to host a company-specific CA.
+
+To distribute trust for self-signed certificates for multiple servers with a single client application, you must install all server application certificates on the client application trust list. Additionally, you must install the client application certificate on all server application trust lists. This administrative effort is quite a burden, and even increases when you have to consider certificate lifetimes and renew certificates.
+
+The use of a company-specific CA can greatly simplify the management of trust with
+multiple servers and clients. In this case, the administrator generates a CA signed
+application instance certificate once for every client and server used. In addition, the CA Certificate is installed in every application trust list, on all servers and clients. With this approach, only expired certificates need to be renewed and replaced for the affected applications.
+
+Azure Industrial IoT OPC UA certificate management service helps you manage a company-specific CA for OPC UA applications. This service is based on the OPC Vault microservice. OPC Vault provides a microservice to host a company-specific CA in a secure cloud. This solution is backed by services secured by Azure Active Directory (Azure AD), Azure Key Vault with Hardware Security Modules (HSMs), Azure Cosmos DB, and optionally IoT Hub as an application store.
+
+The OPC Vault microservice is designed to support role-based workflow, where security
+administrators and approvers with signing rights in Azure Key Vault approve or reject requests.
+
+For compatibility with existing OPC UA solutions, the services include
+support for an OPC Vault microservice-backed edge module. This module implements the
+**OPC UA Global Discovery Server and Certificate Management** interface, to distribute certificates and trust lists according to Part 12 of the specification.
++
+## Architecture
+
+The architecture is based on the OPC Vault microservice, with an OPC Vault
+IoT Edge module for the factory network and a web sample UX to control the workflow:
+
+![Diagram of OPC Vault architecture](media/overview-opc-vault-architecture/opc-vault.png)
+
+## OPC Vault microservice
+
+The OPC Vault microservice consists of the following interfaces to implement
+the workflow to distribute and manage a company-specific CA for OPC UA applications.
+
+### Application
+- An OPC UA application can be a server or a client, or both. OPC Vault serves in this
+case as an application registration authority.
+- In addition to the basic operations to register, update, and unregister applications, there are also interfaces to find and query for applications with search expressions.
+- The certificate requests must reference a valid application, in order to process a request and to issue a signed certificate with all OPC UA-specific extensions.
+- The application service is backed by a database in Azure Cosmos DB.
+
+### Certificate group
+- A certificate group is an entity that stores a root CA or a sub CA certificate, including the private key to sign certificates.
+- The RSA key length, the SHA-2 hash length, and the lifetimes are configurable for both Issuer CA and signed application certificates.
+- You store the CA certificates in Azure Key Vault, backed with FIPS 140-2 Level 2 HSM. The private key never leaves the secure storage, because signing is done by a Key Vault operation secured by Azure AD (see the signing sketch after this list).
+- You can renew the CA certificates over time, and have them remain in safe storage due to Key Vault history.
+- The revocation list for each CA certificate is also stored in Key Vault as a secret. When an application is unregistered, the application certificate is also revoked in the Certificate Revocation List (CRL) by an administrator.
+- You can revoke single certificates, as well as batched certificates.
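+
+As a rough illustration of that Key Vault-backed signing step, the sketch below signs a SHA-256 digest with the `azure-keyvault-keys` cryptography client, so the Issuer CA private key is never downloaded. The vault URL, key name, and data are placeholders.
+
+```python
+# Minimal sketch: sign a SHA-256 digest with a CA key held in Azure Key Vault.
+# The vault URL, key name, and data are placeholders.
+import hashlib
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys.crypto import CryptographyClient, SignatureAlgorithm
+
+credential = DefaultAzureCredential()
+crypto = CryptographyClient(
+    "https://my-vault.vault.azure.net/keys/issuer-ca-key", credential)
+
+# Only the digest travels to Key Vault; the private key stays in the HSM.
+digest = hashlib.sha256(b"<to-be-signed certificate bytes>").digest()
+result = crypto.sign(SignatureAlgorithm.rs256, digest)
+print(len(result.signature), "byte signature produced by", result.key_id)
+```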
+
+### Certificate request
+A certificate request implements the workflow to generate a new key pair or a signed certificate, by using a Certificate Signing Request (CSR) for an OPC UA application. A sketch of creating such a CSR follows the list below.
+- The request is stored in a database with accompanying information, like the subject or a CSR, and a reference to the OPC UA application.
+- The business logic in the service validates the request against the information stored in the application database. For example, the application Uri in the database must match the application Uri in the CSR.
+- A security administrator with signing rights (that is, the Approver role) approves or rejects the request. If the request is approved, a new key pair or signed certificate (or both) are generated. The new private key is securely stored in Key Vault, and the new signed public certificate is stored in the certificate request database.
+- The requester can poll the request status until it is approved or revoked. If the request was approved, the private key and the certificate can be downloaded and installed in the certificate store of the OPC UA application.
+- The requester can now accept the request to delete unnecessary information from the request database.
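+
+As a sketch of the client side of this workflow, the snippet below uses the Python `cryptography` package to generate a key pair and a CSR that carries the application URI as a subject alternative name. The subject name and URI are placeholders; approval and signing still happen in the OPC Vault service.
+
+```python
+# Illustrative sketch: generate a key pair and a CSR for an OPC UA application.
+# The subject name and application URI are placeholders.
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+csr = (
+    x509.CertificateSigningRequestBuilder()
+    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "MyOpcUaServer")]))
+    # OPC UA expects the application URI as a URI subject alternative name.
+    .add_extension(
+        x509.SubjectAlternativeName(
+            [x509.UniformResourceIdentifier("urn:contoso:factory:opcuaserver1")]),
+        critical=False,
+    )
+    .sign(key, hashes.SHA256())
+)
+
+print(csr.public_bytes(serialization.Encoding.PEM).decode())
+```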
+
+Over the lifetime of a signed certificate, an application might be deleted or a key might become compromised. In such a case, a CA manager can:
+- Delete an application, which also deletes all pending and approved certificate requests of the app.
+- Delete just a single certificate request, if only a key is renewed or compromised.
+
+Approved and accepted certificate requests that are compromised are now marked as deleted.
+
+A manager can regularly renew the Issuer CA CRL. At renewal time, all the deleted certificate requests are revoked, and their certificate serial numbers are added to the CRL. Revoked certificate requests are marked as revoked. In urgent events, single certificate requests can be revoked, too.
+
+Finally, the updated CRLs are available for distribution to the participating OPC UA clients and servers.
+
+## OPC Vault IoT Edge module
+To support a factory network Global Discovery Server, you can deploy the OPC Vault module on the edge. Run it as a local .NET Core application, or start it in a Docker container. Note that because of a lack of OAuth2 authentication support in the current OPC UA .NET Standard stack, the functionality of the OPC Vault edge module is limited to a Reader role. A user can't be impersonated from the edge module to the microservice by using the OPC UA GDS standard interface.
+
+## Next steps
+
+Now that you have learned about the OPC Vault architecture, you can:
+
+> [!div class="nextstepaction"]
+> [Build and deploy OPC Vault](howto-opc-vault-deploy.md)
iot-accelerators Overview Opc Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-vault.md
+
+ Title: What is OPC Vault - Azure | Microsoft Docs
+description: This article provides an overview of OPC Vault. It can configure, register, and manage certificate lifecycle for OPC UA applications in the cloud.
++ Last updated : 11/26/2018++++++
+# What is OPC Vault?
+
+> [!IMPORTANT]
+> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
+
+OPC Vault is a microservice that can configure, register, and manage the certificate lifecycle for OPC UA server and client applications in the cloud. This article describes OPC Vault's simple use cases.
+
+## Certificate management
+
+For example, a manufacturing company needs to connect its OPC UA server machine to its newly built client application. When the manufacturer first tries to access the server machine, an error message is immediately shown on the OPC UA server application to indicate that the client application is not secure. This mechanism is built into the OPC UA server machine to prevent unauthorized application access, which protects against malicious attacks on the shop floor.
+
+## Application security management
+A security professional uses the OPC Vault microservice to easily enable the OPC UA server to communicate with any client application, because OPC Vault has all the functions for certificate registry, storage, and lifecycle management. Now that the OPC UA server is securely connected, it can communicate with the newly built client application.
+
+## The complete OPC Vault architecture
+The following diagram illustrates the complete OPC Vault architecture.
+
+![OPC Vault architecture](media/overview-opc-vault-architecture/opc-vault.png)
+
+## Next steps
+
+Now that you have learned about OPC Vault and its uses, here is the suggested next step:
+
+[OPC Vault architecture](overview-opc-vault-architecture.md)
iot-accelerators Quickstart Connected Factory Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/quickstart-connected-factory-deploy.md
+
+ Title: Try a solution to manage my industrial IoT devices - Azure | Microsoft Docs
+description: In this quickstart, you deploy the Connected Factory Azure IoT solution accelerator, and sign in to and use the solution dashboard.
++++++ Last updated : 03/08/2019++
+# As an IT Pro, I want to try out a cloud-based solution to understand how I can monitor and manage my industrial IoT devices.
++
+# Quickstart: Try a cloud-based solution to manage my industrial IoT devices
+
+This quickstart shows you how to deploy the Azure IoT Connected Factory solution accelerator to run a cloud-based monitoring and management simulation for industrial IoT devices. When you deploy the Connected Factory solution accelerator, it's pre-populated with simulated resources that let you step through a common industrial IoT scenario. Several simulated factories are connected to the solution, and they report the data values needed to compute overall equipment efficiency (OEE) and key performance indicators (KPIs). This quickstart shows you how to use the solution dashboard to:
+
+* Monitor factory, production lines, station OEE, and KPI values.
+* Analyze the telemetry data generated from these devices.
+* Respond to alarms.
+
+To complete this quickstart, you need an active Azure subscription.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Deploy the solution
+
+When you deploy the solution accelerator to your Azure subscription, you must set some configuration options.
+
+Navigate to [Microsoft Azure IoT solution accelerators](https://www.azureiotsolutions.com) and sign in using your Azure account credentials.
+
+Click the **Connected Factory** tile. On the **Connected Factory** page, click **Try Now**:
+
+![Try now](./media/quickstart-connected-factory-deploy/connectedfactory.png)
+
+On the **Create Connected Factory solution** page, enter a unique **Solution name** for your Connected Factory solution accelerator. This name is the name of the Azure resource group that contains all the solution accelerator resources. This quickstart uses the name **MyDemoConnectedFactory**.
+
+Select the **Subscription** and **Region** you want to use to deploy the solution accelerator. Typically, you choose the region closest to you. For this quickstart, we're using **Visual Studio Enterprise** and **East US**. You must be a [global administrator or user](iot-accelerators-permissions.md) in the subscription.
+
+Click **Create** to start your deployment. This process takes at least five minutes to run:
+
+![Connected Factory solution details](./media/quickstart-connected-factory-deploy/createform.png)
+
+## Sign in to the solution
+
+When the deployment to your Azure subscription is complete, you see a green checkmark and **Ready** on the solution tile. You can now sign in to your Connected Factory solution accelerator dashboard.
+
+On the **Provisioned solutions** page, click your new Connected Factory solution accelerator:
+
+![Choose new solution](./media/quickstart-connected-factory-deploy/choosenew.png)
+
+You can view information about your Connected Factory solution accelerator in the page that appears. Choose **Go to your Solution accelerator** to view your Connected Factory solution accelerator:
+
+![Solution panel](./media/quickstart-connected-factory-deploy/solutionpanel.png)
+
+Click **Accept** to accept the permissions request. The Connected Factory solution dashboard then displays in your browser. It shows a set of simulated factories, production lines, and stations.
+
+## View the dashboard
+
+The default view is the *dashboard*. To navigate to other areas of the portal, use the menu on the left-hand side of the page:
+
+[![Solution dashboard](./media/quickstart-connected-factory-deploy/dashboard-inline.png)](./media/quickstart-connected-factory-deploy/dashboard-expanded.png#lightbox)
+
+You use the dashboard to manage your industrial IoT devices. Connected Factory uses a hierarchy to show a global factory configuration. The top level of the hierarchy is the enterprise that contains one or more factories. Each factory contains production lines, and each production line is made up of stations. At each level you can view OEE and KPIs, publish new nodes for telemetry, and respond to alarms.
+
+On the dashboard, you can see the following panels:
+
+### Overall equipment efficiency
+
+The **Overall Equipment Efficiency** panel shows the OEE values for the whole enterprise, or the factory/production line/station you're viewing. This value is aggregated from the station view to the enterprise level. The OEE figure and its constituent elements can be further analyzed.
+
+[![Overall equipment efficiency](./media/quickstart-connected-factory-deploy/oee-inline.png)](./media/quickstart-connected-factory-deploy/oee-expanded.png#lightbox)
+
+OEE rates the efficiency of the manufacturing process using production-related operational parameters. OEE is an industry standard measure calculated by multiplying the availability rate, performance rate, and quality rate: OEE = availability x performance x quality.
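+
+To make the formula concrete, here is a minimal sketch of the calculation with illustrative sample rates:
+
+```python
+def oee(availability: float, performance: float, quality: float) -> float:
+    """Overall equipment efficiency: the product of the three rates (each 0.0-1.0)."""
+    return availability * performance * quality
+
+# 90% availability, 95% performance, 98% quality -> roughly 84% OEE.
+print(f"{oee(0.90, 0.95, 0.98):.1%}")
+```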
+
+You can further analyze the OEE for any level in the hierarchy data. Click either the OEE, availability, performance, or quality percentage dial. A context panel appears with visualizations showing data over different timescales:
+
+[![Overall equipment efficiency detail](./media/quickstart-connected-factory-deploy/oeedetail-inline.png)](./media/quickstart-connected-factory-deploy/oeedetail-expanded.png#lightbox)
+
+You can click on a chart to do further analysis of the data.
+
+### Key performance indicators
+
+The **Key Performance Indicators** panel displays the number of units produced per hour and energy (kWh) used by the whole enterprise or by the factory/production line/station you're viewing. These values are aggregated from a station view to the enterprise level.
+
+[![Key performance indicators](./media/quickstart-connected-factory-deploy/kpis-inline.png)](./media/quickstart-connected-factory-deploy/kpis-expanded.png#lightbox)
+
+You can further analyze the KPIs for any level in the hierarchy data. Click either of the KPI dials. A context panel appears with visualizations showing data over different timescales:
+
+[![KPI detail](./media/quickstart-connected-factory-deploy/kpidetail-inline.png)](./media/quickstart-connected-factory-deploy/kpidetail-expanded.png#lightbox)
+
+You can click on a chart to do further analysis of the data.
+
+### Factory Locations
+
+The **Factory locations** panel shows the status, location, and current production configuration of the factories in the solution. When you first run the solution accelerator, the dashboard shows a simulated set of factories. Each production line simulation is made up of three real OPC UA servers that run simulated tasks and share data. For more information about OPC UA, see the [Connected Factory FAQ](iot-accelerators-faq-cf.md):
+
+[![Factory locations](./media/quickstart-connected-factory-deploy/factorylocations-inline.png)](./media/quickstart-connected-factory-deploy/factorylocations-expanded.png#lightbox)
+
+You can navigate through the solution hierarchy and view OEE values and KPIs at each level:
+
+1. In **Factory Locations**, click **Mumbai**. You see the production lines at this location.
+
+1. Click **Production Line 1**. You see the stations on this production line.
+
+1. Click **Packaging**. You see the OPC UA nodes published by this station.
+
+1. Click **EnergyConsumption**. You see some charts plotting this value over different timescales. You can click on a chart to do further analysis of the data.
+
+[![View energy consumption](./media/quickstart-connected-factory-deploy/explorelocations-inline.png)](./media/quickstart-connected-factory-deploy/explorelocations-expanded.png#lightbox)
+
+### Map
+
+If your subscription has access to the [Bing Maps API](iot-accelerators-faq-cf.md), the *Factories* map shows you the geographical location and status of all the factories in the solution. To drill into the location details, click the locations displayed on the map.
+
+[![Map](./media/quickstart-connected-factory-deploy/map-inline.png)](./media/quickstart-connected-factory-deploy/map-expanded.png#lightbox)
+
+### Alarms
+
+The **Alarms** panel shows alarms generated when a reported value or a calculated OEE/KPI value goes over a threshold. This panel displays alarms at each level of the hierarchy, from the station level to the enterprise. Each alarm includes a description, date, time, location, and number of occurrences:
+
+[![Alarms](./media/quickstart-connected-factory-deploy/alarms-inline.png)](./media/quickstart-connected-factory-deploy/alarms-expanded.png#lightbox)
+
+You can analyze the data that caused the alarm from the dashboard. If you're an Administrator, you can take default actions on the alarms such as:
+
+* Close the alarm.
+* Acknowledge the alarm.
+
+Click one of the alarms. In the **Choose action** drop-down, choose **Acknowledge alert**, and then click **Apply**:
+
+[![Acknowledge alarm](./media/quickstart-connected-factory-deploy/acknowledge-inline.png)](./media/quickstart-connected-factory-deploy/acknowledge-expanded.png#lightbox)
+
+To further analyze the alarm data, click the graph in the alarm panel.
+
+These alarms are generated by rules that are specified in a configuration file in the solution accelerator. These rules can generate alarms when the OEE or KPI figures or OPC UA node values go over a threshold. You can set this threshold value.
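+
+Conceptually, each rule compares a reported or calculated value against its configured threshold. The sketch below illustrates the idea only; it is not the accelerator's actual configuration format.
+
+```python
+# Conceptual sketch of threshold-based alarm rules; not the accelerator's real config format.
+rules = [
+    {"metric": "EnergyConsumption", "operator": ">", "threshold": 150.0},
+    {"metric": "OEE", "operator": "<", "threshold": 0.60},
+]
+
+def check(rule, value):
+    """Return True when the value crosses the rule's threshold."""
+    return value > rule["threshold"] if rule["operator"] == ">" else value < rule["threshold"]
+
+sample = {"EnergyConsumption": 162.4, "OEE": 0.72}
+for rule in rules:
+    if check(rule, sample[rule["metric"]]):
+        print(f"Alarm: {rule['metric']} {rule['operator']} {rule['threshold']}")
+```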
+
+## Clean up resources
+
+If you plan to explore further, leave the Connected Factory solution accelerator deployed.
+
+If you no longer need the solution accelerator, delete it from the [Provisioned solutions](https://www.azureiotsolutions.com/Accelerators#dashboard) page by selecting it, and then clicking **Delete Solution**:
+
+![Delete solution](media/quickstart-connected-factory-deploy/deletesolution.png)
+
+## Next steps
+
+In this quickstart, you deployed the Connected Factory solution accelerator and learned how to navigate through your factories, production lines, and stations. You also saw how to view the OEE and KPI values at any level in the hierarchy, and how to respond to alarms.
+
+To learn how to use other features in the dashboard to manage your industrial IoT devices, continue to the following how-to guide:
+
+> [!div class="nextstepaction"]
+> [Use the Connected Factory dashboard](iot-accelerators-connected-factory-dashboard.md)
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
Last updated 03/17/2021
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
-[![Browse code](media/common/browse-github-code.png)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
+[![Browse code](media/common/browse-code-github.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
In this tutorial you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT. The article is part of the series [Get started with Azure IoT embedded device development](quickstart-device-development.md). The series introduces device developers to Azure RTOS, and shows how to connect several device evaluation kits to Azure IoT.
iot-edge How To Install Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge.md
Previously updated : 03/01/2021 Last updated : 03/26/2021
Check to see that the IoT Edge system service is running.
sudo iotedge system status ```
+A successful status response is `Ok`.
+ ::: moniker-end If you need to troubleshoot the service, retrieve the service logs.
Using curl commands, you can target the component files directly from the IoT Ed
2. Use the copied link in the following command to install that version of the identity service: ```bash
- curl -L <identity service link> -o aziot-identity-service.deb && sudo dpkg -i ./aziot-identity-service.deb
+ curl -L <identity service link> -o aziot-identity-service.deb && sudo apt-get install ./aziot-identity-service.deb
``` 3. Find the **aziot-edge** file that matches your IoT Edge device's architecture. Right-click on the file link and copy the link address.
Using curl commands, you can target the component files directly from the IoT Ed
4. Use the copied link in the following command to install that version of IoT Edge. ```bash
- curl -L <iotedge link> -o aziot-edge.deb && sudo dpkg -i ./aziot-edge.deb
+ curl -L <iotedge link> -o aziot-edge.deb && sudo apt-get install ./aziot-edge.deb
``` <!-- end 1.2 -->
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-raspberry-pi.md
device.
Device Update for Azure IoT Hub software is subject to the following license terms: * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE.md)
+ * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device update for IoT Hub agent.
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
There are two versions of the agent. If you're exercising image-based scenario,
``` Device Update for Azure IoT Hub software is subject to the following license terms: * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE.md)
+ * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device update for IoT Hub agent.
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
1. To begin, click the button below:
- [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.2.0-rc4%2FedgeDeploy.json)
+ [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fdevice-update-tutorial%2FedgeDeploy.json)
1. On the newly launched window, fill in the available form fields:
Read the license terms prior to using a package. Your installation and use of a
1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in Github and click the "Assets" drop-down.
-3. Download the `apt-update-import-samples.zip` by clicking on it.
+3. Download the `Edge.package.update.samples.zip` by clicking on it.
-5. Extract the contents of the folder to discover various update samples and their corresponding import manifests.
+5. Extract the contents of the folder to discover an update sample and its corresponding import manifests.
2. In Azure portal, select the Device Updates option under Automatic Device Management from the left-hand navigation bar in your IoT Hub.
Read the license terms prior to using a package. Your installation and use of a
4. Select "+ Import New Update".
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the `sample-package-update-1.0.1-importManifest.json` import manifest from the folder you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the `sample-1.0.1-libcurl4-doc-apt-manifest.json` apt manifest update file from the folder you downloaded previously.
-This update will install the latest available version of `libcurl4-doc package` to your device.
-
- Alternatively, you can select the `sample-package-update-2-2.0.1-importManifest.json` import manifest file and `sample-2.0.1-libcurl4-doc-7.58-apt-manifest.json` apt manifest update file from the folder you downloaded previously. This will install specific version v7.58.0 of the `libcurl4-doc package` to your device.
+5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the `sample-1.0.1-aziot-edge-importManifest.json` import manifest from the folder you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the `sample-1.0.1-aziot-edge-apt-manifest.json` apt manifest update file from the folder you downloaded previously.
+This update will update the `aziot-identity-service` and the `aziot-edge` packages to version 1.2.0~rc4-1 on your device.
:::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
This update will install the latest available version of `libcurl4-doc package`
You have now completed a successful end-to-end package update using Device Update for IoT Hub on a Ubuntu Server 18.04 x64 device.
-## Bonus steps
-
-1. Repeat the "Import update" and "Deploy update" sections
-
-3. During the "Import update" step, select the `sample-package-update-1.0.2-importManifest.json` import manifest file and `sample-1.0.2-libcurl4-doc-remove-apt-manifest.json` apt manifest update file from the folder you downloaded previously. This update will remove the installed `libcurl4-doc package` from your device.
- ## Clean up resources When no longer needed, clean up your device update account, instance, IoT Hub and the IoT Edge device (if you created the VM via the Deploy to Azure button). You can do so, by going to each individual resource and selecting "Delete". Note that you need to clean up a device update instance before cleaning up the device update account.
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
IoT Hub batches messages and writes data to storage whenever the batch reaches a
You may use any file naming convention, however you must use all listed tokens. IoT Hub will write to an empty blob if there is no data to write.
-We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a [Microsoft-initiated failover](iot-hub-ha-dr.md#microsoft-initiated-failover) or IoT Hub [manual failover](iot-hub-ha-dr.md#manual-failover). You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path/list) for the list of files. Please see the following sample as guidance.
+We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a [Microsoft-initiated failover](iot-hub-ha-dr.md#microsoft-initiated-failover) or IoT Hub [manual failover](iot-hub-ha-dr.md#manual-failover). You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path) for the list of files. Please see the following sample as guidance.
```csharp public void ListBlobsInContainer(string containerName, string iothub)
Use the [troubleshooting guide for routing](troubleshoot-message-routing.md) for
* [How to send device-to-cloud messages](quickstart-send-telemetry-node.md)
-* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-distributed-tracing.md
These instructions are for building the sample on Windows. For other environment
### Clone the source code and initialize
-1. Install ["Desktop development with C++" workload](/cpp/build/vscpp-step-0-installation?view=vs-2019) for Visual Studio 2019. Visual Studio 2017 and 2015 are also supported.
+1. Install ["Desktop development with C++" workload](/cpp/build/vscpp-step-0-installation?view=vs-2019&preserve-view=true) for Visual Studio 2019. Visual Studio 2017 and 2015 are also supported.
1. Install [CMake](https://cmake.org/). Make sure it is in your `PATH` by typing `cmake -version` from a command prompt.
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
|Exchange Online & SharePoint Online|Allow access to customer key for Azure Storage Service Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview).| |Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)| |Azure App Service|[Deploy Azure Web App Certificate through Key Vault](https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html).|
-|Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](../../azure-sql/database/transparent-data-encryption-byok-overview.md?view=sql-server-2017&preserve-view=true&viewFallbackFrom=azuresqldb-current).|
+|Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](../../azure-sql/database/transparent-data-encryption-byok-overview.md).|
|Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).| |Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.| |Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)|
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/overview.md
In addition, Azure Key Vaults allow you to segregate application secrets. Applic
As a secure store in Azure, Key Vault has been used to simplify scenarios like: - [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md)-- The [always encrypted]( https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-database-engine) and [Transparent Data Encryption]( https://docs.microsoft.com/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver15) functionality in SQL server and Azure SQL Database-- [Azure App Service]( https://docs.microsoft.com/azure/app-service/configure-ssl-certificate).
+- The [always encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) and [Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) functionality in SQL server and Azure SQL Database
+- [Azure App Service](/azure/app-service/configure-ssl-certificate).
Key Vault itself can integrate with storage accounts, event hubs, and log analytics.
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-template.md
-# Quickstart: Create an Azure key vault and a key by using ARM template (Preview)
+# Quickstart: Create an Azure key vault and a key by using ARM template
[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, certificates, and other secrets. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a key vault and a key.
+> [!NOTE]
+> This feature is not available for Azure Government.
+ ## Prerequisites To complete this article:
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-sql-server.md
This article describes how to set up a lab for a basic SQL Server management and development class in Azure Lab Services. Database concepts are one of the introductory courses taught in most of the Computer Science departments in college. Structured Query Language (SQL) is an international standard. SQL is the standard language for relational database management, including adding, accessing, and managing content in a database. It is most noted for its quick processing, proven reliability, ease, and flexibility of use.
-In this article, we'll show how to set up a virtual machine template in a lab with [Visual Studio 2019](https://visualstudio.microsoft.com/vs/), [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver15), and [Azure Data Studio](https://github.com/microsoft/azuredatastudio). For this lab, we will use one shared [SQL Server Database](../azure-sql/database/sql-database-paas-overview.md) for the entire lab. [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md) is Platform as a Service (PaaS) Database Engine offering from Azure.
+In this article, we'll show how to set up a virtual machine template in a lab with [Visual Studio 2019](https://visualstudio.microsoft.com/vs/), [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), and [Azure Data Studio](https://github.com/microsoft/azuredatastudio). For this lab, we will use one shared [SQL Server Database](../azure-sql/database/sql-database-paas-overview.md) for the entire lab. [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md) is Platform as a Service (PaaS) Database Engine offering from Azure.
## Lab configuration
Enable the settings described in the table below for the lab account. For more i
| Lab account setting | Instructions | | - | |
-| Marketplace image | Enable the ΓÇÿVisual Studio 2019 Community (latest release) on Windows 10 Enterprise N (x64)ΓÇÖ image for use within your lab account. |
+| Marketplace image | Enable the 'Visual Studio 2019 Community (latest release) on Windows 10 Enterprise N (x64)' image for use within your lab account. |
### Shared resource configuration
Now that the networking side of things is handled, lets create a SQL Server Data
9. Choose region for the **location**. If possible, enter the same location as the lab account and peered vnet to minimize latency. 10. Click **OK** to return to the **Create SQL Database** form. 11. Click **Configure database** link under the **Compute + storage** setting.
-12. Modify database settings as needed for the class. You can choose between Provisioned and Serverless options. For this example, we'll use the autoscaled Serverless option with max vCores of 4, min vCores of 1. WeΓÇÖll keep the autopause setting at the minimum of 1 hour. Click **Apply**.
+12. Modify database settings as needed for the class. You can choose between Provisioned and Serverless options. For this example, we'll use the autoscaled Serverless option with max vCores of 4, min vCores of 1. We'll keep the autopause setting at the minimum of 1 hour. Click **Apply**.
13. Click **Next: Networking** button. 14. On the Networking tab, choose Private endpoint for the **Connectivity method**. 15. Under the **Private endpoints** section, click **Add private endpoint**.
Now that the networking side of things is handled, lets create a SQL Server Data
19. Leave the Target subresource set to SqlServer. 20. For **Virtual network**, choose the same virtual network peered to the lab account. 21. For **Subnet**, choose subnet you want the endpoint hosted in. The IP assigned to the endpoint will be from the range assigned to that subnet.
-22. Set **Integrate with private DNS** to **No**. For simplicity, weΓÇÖll use AzureΓÇÖs DNS over own private DNS zone or our own DNS servers.
+22. Set **Integrate with private DNS** to **No**. For simplicity, we'll use Azure's DNS over own private DNS zone or our own DNS servers.
23. Click **OK**. 24. Click **Next: Additional settings**. 25. For the **Use existing data** setting, choose **Sample**. The data from the AdventureWorksLT database will be used when the database is created.
Now that our lab is created, let's modify the template machine with the software
## Visual Studio
-The image chosen above includes [Visual Studio 2019 Community](https://visualstudio.microsoft.com/vs/community/). All workloads and tool sets are already installed on the image. Use the Visual Studio Installer to [install any optional tools](/visualstudio/install/modify-visual-studio?view=vs-2019) you may want. [Sign in to Visual Studio](/visualstudio/ide/signing-in-to-visual-studio?view=vs-2019#how-to-sign-in-to-visual-studio) to unlock the community edition.
+The image chosen above includes [Visual Studio 2019 Community](https://visualstudio.microsoft.com/vs/community/). All workloads and tool sets are already installed on the image. Use the Visual Studio Installer to [install any optional tools](/visualstudio/install/modify-visual-studio?view=vs-2019&preserve-view=true) you may want. [Sign in to Visual Studio](/visualstudio/ide/signing-in-to-visual-studio?view=vs-2019&preserve-view=true#how-to-sign-in-to-visual-studio) to unlock the community edition.
-Visual Studio includes the **Data storage and processing** tool set, which includes SQL Server Data Tools (SSDT). For more information about SSDTΓÇÖs capabilities, see [SQL Server Data Tools overview](/sql/ssdt/sql-server-data-tools?view=sql-server-ver15). To verify connection to the shared SQL Server for the class will be successful, see [connect to a database and browse existing objects](/sql/ssdt/how-to-connect-to-a-database-and-browse-existing-objects?view=sql-server-ver15). If prompted add the template machine IP to the [list of allowed computers](../azure-sql/database/firewall-configure.md) that can connect to your SQL Server instance.
+Visual Studio includes the **Data storage and processing** tool set, which includes SQL Server Data Tools (SSDT). For more information about SSDT's capabilities, see [SQL Server Data Tools overview](/sql/ssdt/sql-server-data-tools). To verify connection to the shared SQL Server for the class will be successful, see [connect to a database and browse existing objects](/sql/ssdt/how-to-connect-to-a-database-and-browse-existing-objects). If prompted add the template machine IP to the [list of allowed computers](../azure-sql/database/firewall-configure.md) that can connect to your SQL Server instance.
Visual Studio supports several workloads including **Web & cloud** and **Desktop & mobile** workloads. Both of these workloads support SQL Server as a data source. For more information using ASP.NET Core to SQL Server, see [build an ASP.NET Core and SQL Database app in Azure App Service](../app-service/tutorial-dotnetcore-sqldb-app.md) tutorial. Use [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) library to connect to a SQL Database from a [Xamarin](/xamarin) app.
Visual Studio supports several workloads including **Web & cloud** and **Desktop
6. On the **Ready to Install**, click **Next**. 7. Wait for the installer to run. Click **Finish**.
-Now that we have Azure Data Studio installed, letΓÇÖs setup the connection to Azure SQL Database.
+Now that we have Azure Data Studio installed, let's setup the connection to Azure SQL Database.
1. On the **Welcome** page for Azure Data Studio, click the **New Connection** link. 2. In the **Connection Details** box, fill in necessary information.
Now that we have Azure Data Studio installed, letΓÇÖs setup the connection to Az
## Install SQL Server Management Studio
-[SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver15) is an integrated environment for managing any SQL infrastructure. SSMS is a tool used by database administrators to deploy, monitor, and upgrade data infrastructure.
+[SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) is an integrated environment for managing any SQL infrastructure. SSMS is a tool used by database administrators to deploy, monitor, and upgrade data infrastructure.
1. [Download Sql Server Management Studio](https://aka.ms/ssmsfullsetup). Once downloaded, start the installer. 2. On the **Welcome** page, click **Install**.
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/upgrade-basic-standard.md
There are two stages in an upgrade:
1. Change IP allocation method from Dynamic to Static. 2. Run the PowerShell script to complete the upgrade and traffic migration.
-> [!IMPORTANT]
-> The script is currently under maintenance. You can refer to instructions [here](../virtual-network/virtual-network-public-ip-address-upgrade.md) on how to upgrade Public IP addresses from Basic SKU and Standard SKU.
- ## Upgrade overview An Azure PowerShell script is available that does the following:
Yes. The Azure PowerShell script not only upgrades the Public IP address, copies
## Next steps
-[Learn about Standard Load Balancer](load-balancer-overview.md)
+[Learn about Standard Load Balancer](load-balancer-overview.md)
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-visual-studio.md
You can also [manage your logic apps in the Azure portal](manage-logic-apps-with
> When you install Visual Studio 2019 or 2017, make sure that you select the **Azure development** workload. > For more information, see [Manage resources associated with your Azure accounts in Visual Studio Cloud Explorer](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer).
- To install Cloud Explorer for Visual Studio 2015, [download Cloud Explorer from the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=MicrosoftCloudExplorer.CloudExplorerforVisualStudio2015). For more information, see [Manage resources associated with your Azure Accounts in Visual Studio Cloud Explorer (2015)](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer?view=vs-2015).
+ To install Cloud Explorer for Visual Studio 2015, [download Cloud Explorer from the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=MicrosoftCloudExplorer.CloudExplorerforVisualStudio2015). For more information, see [Manage resources associated with your Azure Accounts in Visual Studio Cloud Explorer (2015)](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer?view=vs-2015&preserve-view=true).
* [Azure SDK (2.9.1 or later)](https://azure.microsoft.com/downloads/)
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-automated-ml.md
With Azure Machine Learning, you can use automated ML to build a Python model an
See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#select-your-experiment-type).
-The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://github.com/Microsoft/onnxruntime/blob/master/docs/CSharp_API.md).
+The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://github.com/plaidml/onnxruntime/blob/plaidml/docs/CSharp_API.md).
## Next steps
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-machine-learning-pipelines.md
from azureml.core import Dataset
my_dataset = Dataset.File.from_files([(def_blob_store, 'train-images/')]) ```
-Intermediate data (or output of a step) is represented by an [OutputFileDatasetConfig](/python/api/azureml-pipeline-core/azureml.data.output_dataset_config.outputfiledatasetconfig) object. `output_data1` is produced as the output of a step. Optionally, this data can be registered as a dataset by calling `register_on_complete`. If you create an `OutputFileDatasetConfig` in one step and use it as an input to another step, that data dependency between steps creates an implicit execution order in the pipeline.
+Intermediate data (or output of a step) is represented by an [OutputFileDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig) object. `output_data1` is produced as the output of a step. Optionally, this data can be registered as a dataset by calling `register_on_complete`. If you create an `OutputFileDatasetConfig` in one step and use it as an input to another step, that data dependency between steps creates an implicit execution order in the pipeline.
`OutputFileDatasetConfig` objects return a directory, and by default writes output to the default datastore of the workspace.
pipeline1 = Pipeline(workspace=ws, steps=[compare_models])
### Use a dataset
-Datasets created from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL can be used as input to any pipeline step. You can write output to a [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep), [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep), or if you want to write data to a specific datastore use [OutputFileDatasetConfig](/python/api/azureml-pipeline-core/azureml.data.outputfiledatasetconfig).
+Datasets created from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL can be used as input to any pipeline step. You can write output to a [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep), [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep), or if you want to write data to a specific datastore use [OutputFileDatasetConfig](/python/api/azureml-core/azureml.data.outputfiledatasetconfig).
> [!IMPORTANT] > Writing output data back to a datastore using `OutputFileDatasetConfig` is only supported for Azure Blob, Azure File share, ADLS Gen 1 and Gen 2 datastores.
When you start a training run where the source directory is a local Git reposito
- Use [these Jupyter notebooks on GitHub](https://aka.ms/aml-pipeline-readme) to explore machine learning pipelines further - See the SDK reference help for the [azureml-pipelines-core](/python/api/azureml-pipeline-core/) package and the [azureml-pipelines-steps](/python/api/azureml-pipeline-steps/) package - See the [how-to](how-to-debug-pipelines.md) for tips on debugging and troubleshooting pipelines=-- Learn how to run notebooks by following the article [Use Jupyter notebooks to explore this service](samples-notebooks.md).
+- Learn how to run notebooks by following the article [Use Jupyter notebooks to explore this service](samples-notebooks.md).
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
To create and work with datasets, you need:
* Work on your own Jupyter notebook and install the SDK yourself with [these instructions](/python/api/overview/azure/ml/install). > [!NOTE]
-> Some dataset classes have dependencies on the [azureml-dataprep](/python/api/azureml-dataprep/) package, which is only compatible with 64-bit Python. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux (7, 8), Ubuntu (14.04, 16.04, 18.04), Fedora (27, 28), Debian (8, 9), and CentOS (7). If you are using unsupported distros, please follow [this guide](/dotnet/core/install/linux) to install .NET Core 2.1 to proceed.
+> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux (7, 8), Ubuntu (14.04, 16.04, 18.04), Fedora (27, 28), Debian (8, 9), and CentOS (7). If you are using unsupported distros, please follow [this guide](/dotnet/core/install/linux) to install .NET Core 2.1 to proceed.
## Compute size guidance
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
You can also link workspaces and attach a Synapse Spark pool with a single [Azur
* [Create an Azure Machine Learning workspace](how-to-manage-workspace.md?tabs=python).
-* [Create a Synapse workspace in Azure portal](/synapse-analytics/quickstart-create-workspace.md).
+* [Create a Synapse workspace in Azure portal](/azure/synapse-analytics/quickstart-create-workspace).
-* [Create Apache Spark pool using Azure portal, web tools or Synapse Studio](/synapse-analytics/quickstart-create-apache-spark-pool-portal.md)
+* [Create Apache Spark pool using Azure portal, web tools or Synapse Studio](/azure/synapse-analytics/quickstart-create-apache-spark-pool-studio)
* Install the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro)
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-fairness-aml.md
In this how-to guide, you will learn to use the [Fairlearn](https://fairlearn.gi
## Azure Machine Learning Fairness SDK The Azure Machine Learning Fairness SDK, `azureml-contrib-fairness`, integrates the open-source Python package, [Fairlearn](http://fairlearn.github.io),
-within Azure Machine Learning. To learn more about Fairlearn's integration within Azure Machine Learning, check out these [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/contrib/fairness). For more information on Fairlearn, see the [example guide](https://fairlearn.github.io/master/auto_examples/) and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).
+within Azure Machine Learning. To learn more about Fairlearn's integration within Azure Machine Learning, check out these [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/contrib/fairness). For more information on Fairlearn, see the [example guide](https://fairlearn.org/v0.6.0/auto_examples/) and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).
Use the following commands to install the `azureml-contrib-fairness` and `fairlearn` packages: ```bash
The following example shows how to use the fairness package. We will upload mode
1. If you registered your original model by following the previous steps, you can select **Models** in the left pane to view it. 1. Select a model, and then the **Fairness** tab to view the explanation visualization dashboard.
- To learn more about the visualization dashboard and what it contains, check out Fairlearn's [user guide](https://fairlearn.github.io/master/user_guide/assessment.html#fairlearn-dashboard).
+ To learn more about the visualization dashboard and what it contains, check out Fairlearn's [user guide](https://fairlearn.org/v0.6.0/user_guide/assessment.html#fairlearn-dashboard).
## Upload fairness insights for multiple models
To compare multiple models and see how their fairness assessments differ, you ca
## Upload unmitigated and mitigated fairness insights
-You can use Fairlearn's [mitigation algorithms](https://fairlearn.github.io/master/user_guide/mitigation.html), compare their generated mitigated model(s) to the original unmitigated model, and navigate the performance/fairness trade-offs among compared models.
+You can use Fairlearn's [mitigation algorithms](https://fairlearn.org/v0.6.0/user_guide/mitigation.html), compare their generated mitigated model(s) to the original unmitigated model, and navigate the performance/fairness trade-offs among compared models.
-To see an example that demonstrates the use of the [Grid Search](https://fairlearn.github.io/master/user_guide/mitigation.html#grid-search) mitigation algorithm (which creates a collection of mitigated models with different fairness and performance trade offs) check out this [sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/contrib/fairness/fairlearn-azureml-mitigation.ipynb).
+To see an example that demonstrates the use of the [Grid Search](https://fairlearn.org/v0.6.0/user_guide/mitigation.html#grid-search) mitigation algorithm (which creates a collection of mitigated models with different fairness and performance trade offs) check out this [sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/contrib/fairness/fairlearn-azureml-mitigation.ipynb).
Uploading multiple models' fairness insights in a single Run allows for comparison of models with respect to fairness and performance. You can click on any of the models displayed in the model comparison chart to see the detailed fairness insights of the particular model.
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-tensorboard.md
In this article, you learn how to view your experiment runs and metrics in TensorBoard using [the `tensorboard` package](/python/api/azureml-tensorboard/) in the main Azure Machine Learning SDK. Once you've inspected your experiment runs, you can better tune and retrain your machine learning models.
-[TensorBoard](https://www.tensorflow.org/tensorboard/r1/overview) is a suite of web applications for inspecting and understanding your experiment structure and performance.
+[TensorBoard](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard?view=azure-ml-py) is a suite of web applications for inspecting and understanding your experiment structure and performance.
How you launch TensorBoard with Azure Machine Learning experiments depends on the type of experiment: + If your experiment natively outputs log files that are consumable by TensorBoard, such as PyTorch, Chainer and TensorFlow experiments, then you can [launch TensorBoard directly](#launch-tensorboard) from experiment's run history.
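A minimal sketch of launching TensorBoard against a run's logs with the `azureml-tensorboard` package (assuming `run` is an `azureml.core.Run` whose output directory contains TensorBoard-consumable event files):

```python
from azureml.tensorboard import Tensorboard

# "run" is assumed to be an azureml.core.Run object from your experiment.
tb = Tensorboard([run])
tb.start()    # starts a local TensorBoard instance serving the run's logs
# ... browse TensorBoard, then shut the server down ...
tb.stop()
```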
tb.stop()
In this how-to you, created two experiments and learned how to launch TensorBoard against their run histories to identify areas for potential tuning and retraining. * If you are satisfied with your model, head over to our [How to deploy a model](how-to-deploy-and-where.md) article.
-* Learn more about [hyperparameter tuning](how-to-tune-hyperparameters.md).
+* Learn more about [hyperparameter tuning](how-to-tune-hyperparameters.md).
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-datasets.md
To create and train with datasets, you need:
* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install) (>= 1.13.0), which includes the `azureml-datasets` package. > [!Note]
-> Some Dataset classes have dependencies on the [azureml-dataprep](/python/api/azureml-dataprep/) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.
+> Some Dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.
## Consume datasets in machine learning training scripts
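A minimal sketch of consuming a registered `TabularDataset` inside a training script, assuming the dataset was passed to the run as a named input called `training` (the input name is a hypothetical value used for illustration):

```python
from azureml.core import Run

run = Run.get_context()

# "training" is a hypothetical input name defined when the run was configured.
dataset = run.input_datasets["training"]
df = dataset.to_pandas_dataframe()   # works for TabularDataset inputs
print(df.head())
```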
If you are using file share for other workloads, such as data transfer, the re
* [Train image classification models](https://aka.ms/filedataset-samplenotebook) with FileDatasets.
-* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
+* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-managed-identities.md
Once you've configured ACR without admin user as described earlier, you can acce
When creating workspace, you can specify a user-assigned managed identity that will be used to access the associated resources: ACR, KeyVault, Storage, and App Insights.
-First [create a user-assigned managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli]), and take note of the ARM resource ID of the managed identity.
+First [create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli), and take note of the ARM resource ID of the managed identity.
Then, use Azure CLI or Python SDK to create the workspace. When using the CLI, specify the ID using the `--primary-user-assigned-identity` parameter. When using the SDK, use `primary_user_assigned_identity`. The following are examples of using the Azure CLI and Python to create a new workspace using these parameters:
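The article's own examples are truncated in this digest; as a hedged sketch of the Python SDK path (assuming an SDK version that supports the `primary_user_assigned_identity` parameter, with subscription, resource group, and identity resource ID shown as placeholders):

```python
from azureml.core import Workspace

ws = Workspace.create(
    name="myworkspace",
    subscription_id="<subscription-id>",
    resource_group="myresourcegroup",
    location="eastus2",
    # ARM resource ID of the user-assigned managed identity noted earlier.
    primary_user_assigned_identity=(
        "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/"
        "providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
    ),
)
```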
marketplace Analytics Make Your First Api Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics-make-your-first-api-call.md
Curl
## Next steps -- You can try out the APIs through the [Swagger API URL](https://partneranalytics-api.azure-api.net/analytics/cmp/swagger/https://docsupdatetracker.net/index.html)-- [Programmatic access paradigm](analytics-programmatic-access.md)
+- You can try out the APIs through the [Swagger API URL](https://swagger.io/docs/specification/api-host-and-base-path/)
+- [Programmatic access paradigm](analytics-programmatic-access.md)
media-services Content Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/content-protection-overview.md
If you get errors that end with `_NOT_SPECIFIED_IN_URL`, make sure that you spec
## Ask questions, give feedback, get updates Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.-
-## Next steps
-
-* [Protect with AES encryption](protect-with-aes128.md)
-* [Protect with DRM](protect-with-drm.md)
-* [Design multi-DRM content protection system with access control](design-multi-drm-system-with-access-control.md)
-* [Storage side encryption](storage-account-concept.md#storage-side-encryption)
-* [Frequently asked questions](frequently-asked-questions.md)
-* [JSON Web Token Handler](/dotnet/framework/security/json-web-token-handler)
media-services Design Multi Drm System With Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/design-multi-drm-system-with-access-control.md
The following table summarizes native DRM support on different platforms and EME
| **Windows 10** | PlayReady | Microsoft Edge/IE11 for PlayReady|
| **Android devices (phone, tablet, TV)** |Widevine |Chrome for Widevine |
| **iOS** | FairPlay | Safari for FairPlay (since iOS 11.2) |
-| **macOS** | FairPlay | Safari for FairPlay (since Safari 9+ on Mac OS X 10.11+ El Capitan)|
+| **macOS** | FairPlay | Safari for FairPlay (since Safari 9+ on macOS 10.11 El Capitan and later)|
| **tvOS** | FairPlay | |

Considering the current state of deployment for each DRM, a service typically wants to implement two or three DRMs to make sure you address all the types of endpoints in the best way.
The following screenshot shows a scenario that uses an asymmetric key via an X50
![Custom STS with an asymmetric key](./media/design-multi-drm-system-with-access-control/media-services-running-sts2.png) In both of the previous cases, user authentication stays the same. It takes place through Azure AD. The only difference is that JWTs are issued by the custom STS instead of Azure AD. When you configure dynamic CENC protection, the license delivery service restriction specifies the type of JWT, either a symmetric or an asymmetric key.-
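A minimal sketch of a custom STS issuing an asymmetrically signed JWT with PyJWT (the issuer, audience, claim, and `private_key_pem` below are assumptions; they must match the token restriction you configure for license delivery):

```python
import time
import jwt  # PyJWT, with the "cryptography" package installed for RS256

# private_key_pem is assumed to hold the PEM-encoded private key of your X.509 certificate.
claims = {
    "iss": "https://sts.contoso.com",      # hypothetical issuer
    "aud": "urn:contoso-media",            # hypothetical audience
    "exp": int(time.time()) + 3600,
    "userProfile": "Premium",              # example business-logic claim
}
token = jwt.encode(claims, private_key_pem, algorithm="RS256")
```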
-## Next steps
-
-* [Frequently asked questions](frequently-asked-questions.md)
-* [Content protection overview](content-protection-overview.md)
-* [Protect your content with DRM](protect-with-drm.md)
media-services Live Events Outputs Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-events-outputs-concept.md
Once you have the stream flowing into the live event, you can begin the streamin
For detailed information about live outputs, see [Using a cloud DVR](live-event-cloud-dvr.md).
-## Frequently asked questions
+## Live event output questions
-See the [Frequently asked questions](frequently-asked-questions.md#live-streaming) article.
-
-## Ask questions and get updates
-
-Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.
-
-## Next steps
-
-[Live streaming tutorial](stream-live-tutorial-with-api.md)
+See the [live event output questions](questions-collection.md#live-streaming) article.
media-services Live Streaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-streaming-overview.md
Title: Overview of Live streaming description: This article gives an overview of live streaming using Azure Media Services v3. - Previously updated : 08/31/2020 Last updated : 03/25/2021 # Live streaming with Azure Media Services v3
The asset that the live output is archiving to, automatically becomes an on-dema
- [States and billing](live-event-states-billing.md) - [Latency](live-event-latency.md)
-## Frequently asked questions
+## Live streaming questions
-See the [Frequently asked questions](frequently-asked-questions.md#live-streaming) article.
-
-## Ask questions, give feedback, get updates
-
-Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.
-
-## Next steps
-
-* [Live streaming quickstart](live-events-wirecast-quickstart.md)
-* [Live streaming tutorial](stream-live-tutorial-with-api.md)
-* [Migration guidance for moving from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
+See the [live streaming questions](questions-collection.md#live-streaming) article.
media-services Migrate V 2 V 3 Migration Scenario Based Content Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-content-protection.md
Title: Content protection migration guidance
-description: This article is gives you content protection scenario based guidance that will assist you min migrating from Azure Media Services v2 to v3.
+description: This article gives you content protection scenario-based guidance that will assist you in migrating from Azure Media Services v2 to v3.
Previously updated : 03/25/2021 Last updated : 03/26/2021
![migration steps 2](./media/migration-guide/steps-4.svg)
-This article is gives you content protection scenario based guidance that will assist you min migrating from Azure Media Services v2 to v3.
+This article provides you with details and guidance on the migration of content protection use cases from the v2 API to the new Azure Media Services v3 API.
## Protect content in v3 API
Use the support for [Multi-key](design-multi-drm-system-with-access-control.md)
See content protection concepts, tutorials and how to guides below for specific steps.
+## Visibility of v2 Assets, StreamingLocators, and properties in the v3 API for content protection scenarios
+
+During migration to the v3 API, you will find that you need to access some properties or content keys from your v2 Assets. One key difference is that the v2 API would use the **AssetId** as the primary identification key and the new v3 API uses the Azure Resource Manager name of the entity as the primary identifier. The v2 **Asset.Name** property is not typically used as a unique identifier, so when migrating to v3 you will find that your v2 Asset names now appear in the **Asset.Description** field.
+
+For example, if you previously had a v2 Asset with the ID of **"nb:cid:UUID:8cb39104-122c-496e-9ac5-7f9e2c2547b8"**, then you will find when listing the old v2 assets through the v3 API, the name will now be the GUID part at the end (in this case, **"8cb39104-122c-496e-9ac5-7f9e2c2547b8"**.)
+
+You can query the **StreamingLocators** associated with the Assets created in the v2 API using the new v3 method [ListStreamingLocators](https://docs.microsoft.com/rest/api/media/assets/liststreaminglocators) on the Asset entity. Also reference the .NET client SDK version of [ListStreamingLocatorsAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.assetsoperationsextensions.liststreaminglocatorsasync?view=azure-dotnet)
+
+The results of the **ListStreamingLocators** method will provide you the **Name** and **StreamingLocatorId** of the locator along with the **StreamingPolicyName**.
+
+To find the **ContentKeys** used in your **StreamingLocators** for content protection, you can call the [StreamingLocator.ListContentKeysAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.streaminglocatorsoperationsextensions.listcontentkeysasync?view=azure-dotnet) method.
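A hedged sketch of the equivalent calls from the Python management SDK (`azure-mgmt-media`), assuming a recent SDK release that accepts `azure-identity` credentials; the resource group, account name, and asset name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices

client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

# For a v2 asset, the v3 asset name is the GUID portion of the old AssetId.
locators = client.assets.list_streaming_locators(
    "<resource-group>", "<account-name>", "8cb39104-122c-496e-9ac5-7f9e2c2547b8")

for locator in locators.streaming_locators:
    print(locator.name, locator.streaming_locator_id, locator.streaming_policy_name)
    keys = client.streaming_locators.list_content_keys(
        "<resource-group>", "<account-name>", locator.name)
    for key in keys.content_keys:
        print("  content key:", key.id)
```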
+
+Any **Assets** that were created and published using the v2 API will have both a [Content Key Policy](https://docs.microsoft.com/azure/media-services/latest/content-key-policy-concept) and a Content Key defined on them in the v3 API, instead of using a default content key policy on the [Streaming Policy](https://docs.microsoft.com/azure/media-services/latest/streaming-policy-concept).
+
+For more information on content protection in the v3 API, see the article [Protect your content with Media Services dynamic encryption.](https://docs.microsoft.com/azure/media-services/latest/content-protection-overview)
+
+## How to list your v2 Assets and content protection settings using the v3 API
+
+In the v2 API, you would commonly use **Assets**, **StreamingLocators**, and **ContentKeys** to protect your streaming content.
+When migrating to the v3 API, your v2 API Assets, StreamingLocators, and ContentKeys are all exposed automatically in the v3 API and all of the data on them is available for you to access.
+
+## Can I update v2 properties using the v3 API?
+
+No. You cannot use the v3 API to update properties on entities that were created in v2, such as StreamingLocators, StreamingPolicies, Content Key Policies, and Content Keys.
+If you need to update or change content stored on v2 entities, you will need to update it via the v2 API or create new v3 API entities to migrate them forward.
+
+## How do I change the ContentKeyPolicy used for a v2 Asset that is published and keep the same content key?
+
+In this situation, you should first unpublish the Asset (remove all Streaming Locators) via the v2 SDK (delete the locator, unlink the Content Key Authorization Policy, unlink the Asset Delivery Policy, unlink the Content Key, delete the Content Key), and then create a new **[StreamingLocator](https://docs.microsoft.com/azure/media-services/latest/streaming-locators-concept)** in v3 using a v3 [StreamingPolicy](https://docs.microsoft.com/azure/media-services/latest/streaming-policy-concept) and [ContentKeyPolicy](https://docs.microsoft.com/azure/media-services/latest/content-key-policy-concept).
+
+You would need to specify the specific content key identifier and key value needed when you are creating the **[StreamingLocator](https://docs.microsoft.com/azure/media-services/latest/streaming-locators-concept)**.
+
+Note that it is possible to delete the v2 locator using the v3 API, but this will not remove the content key or the content key policy used if they were created in the v2 API.
+
+## Using AMSE v2 and AMSE v3 side by side
+
+When migrating your content from v2 to v3, it is advised to install the [v2 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer/releases/tag/v4.3.15.0) along with the [v3 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer) to help compare the data that they show side by side for an Asset that is created and published via v2 APIs. The properties should all be visible, but in slightly different locations now.
++ ## Content protection concepts, tutorials and how to guides ### Concepts
See content protection concepts, tutorials and how to guides below for specific
## Samples You can also [compare the V2 and V3 code in the code samples](migrate-v-2-v-3-migration-samples.md).+
+## Tools
+
+- [v3 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer)
+- [v2 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer/releases/tag/v4.3.15.0)
media-services Offline Fairplay For Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/offline-fairplay-for-ios.md
Title: Media Services v3 offline FairPlay Streaming for iOS description: This topic gives an overview and shows how to use Azure Media Services v3 to dynamically encrypt your HTTP Live Streaming (HLS) content with Apple FairPlay in offline mode.
-keywords: HLS, DRM, FairPlay Streaming (FPS), Offline, iOS 10
-+ - Previously updated : 08/31/2020-- Last updated : 03/25/2021+ # Offline FairPlay Streaming for iOS with Media Services v3
Three test samples in Media Services cover the following three scenarios:
You can find these samples at [this demo site](https://aka.ms/poc#22), with the corresponding application certificate hosted in an Azure web app. With either the version 3 or version 4 sample of the FPS Server SDK, if a master playlist contains alternate audio, during offline mode it plays audio only. Therefore, you need to strip the alternate audio. In other words, the second and third samples listed previously work in online and offline mode. The sample listed first plays audio only during offline mode, while online streaming works properly.
-## FAQ
+## Offline FairPlay questions
-See [frequently asked questions provide assistance with troubleshooting](frequently-asked-questions.md#why-does-only-audio-play-but-not-video-during-offline-mode).
-
-## Next steps
-
-Check out how to [protect with AES-128](protect-with-aes128.md)
+See [offline FairPlay questions](questions-collection.md#why-does-only-audio-play-but-not-video-during-offline-mode).
media-services Offline Plaready Streaming For Windows 10 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/offline-plaready-streaming-for-windows-10.md
Azure Media Services supports offline download/playback with DRM protection. This
> [!NOTE] > Offline DRM is only billed for making a single request for a license when you download the content. Any errors are not billed.
-## Overview
+## Background on offline mode playback
This section gives some background on offline mode playback, especially why:
You can use two types of PlayReady license delivery:
Below are two sets of test assets, the first one using PlayReady license delivery in AMS while the second one using my PlayReady license server hosted on an Azure VM:
-Asset #1:
+## Asset #1
* Progressive download URL: [https://willzhanmswest.streaming.mediaservices.windows.net/8d078cf8-d621-406c-84ca-88e6b9454acc/20150807-bridges-2500_H264_1644kbps_AAC_und_ch2_256kbps.mp4](https://willzhanmswest.streaming.mediaservices.windows.net/8d078cf8-d621-406c-84ca-88e6b9454acc/20150807-bridges-2500_H264_1644kbps_AAC_und_ch2_256kbps.mp4) * PlayReady LA_URL (AMS): `https://willzhanmswest.keydelivery.mediaservices.windows.net/PlayReady/`
-Asset #2:
+## Asset #2
* Progressive download URL: [https://willzhanmswest.streaming.mediaservices.windows.net/7c085a59-ae9a-411e-842c-ef10f96c3f89/20150807-bridges-2500_H264_1644kbps_AAC_und_ch2_256kbps.mp4](https://willzhanmswest.streaming.mediaservices.windows.net/7c085a59-ae9a-411e-842c-ef10f96c3f89/20150807-bridges-2500_H264_1644kbps_AAC_und_ch2_256kbps.mp4) * PlayReady LA_URL (on premises): `https://willzhan12.cloudapp.net/playready/rightsmanager.asmx`
In summary, we have achieved offline mode on Azure Media
* Content can be hosted in Azure Media Services or Azure Storage for progressive download; * PlayReady license delivery can be from Azure Media Services or elsewhere; * The prepared smooth streaming content can still be used for online streaming via DASH or smooth with PlayReady as the DRM.-
-## Next steps
-
-[Design of a multi-DRM content protection system with access control](design-multi-drm-system-with-access-control.md)
media-services Offline Widevine For Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/offline-widevine-for-android.md
Title: Stream Widevine Android offline description: This topic shows how to configure your Azure Media Services v3 account for offline streaming of Widevine protected content.
-keywords: DASH, DRM, Widevine Offline Mode, ExoPlayer, Android
-+ Previously updated : 08/31/2020-- Last updated : 03/25/2021+ # Offline Widevine streaming for Android with Media Services v3
The article also answers some common questions related to offline streaming of W
> [!NOTE] > Offline DRM is only billed for making a single request for a license when you download the content. Any errors are not billed.
-## Prerequisites
+## Prerequisites
Before implementing offline DRM for Widevine on Android devices, you should first:
The above open-source PWA app is authored in Node.js. If you want to host your o
- The certificate must be issued by a trusted CA; a self-signed development certificate does not work - The certificate must have a CN matching the DNS name of the web server or gateway
-## FAQs
+## More information
-For more information, see [Widevine FAQs](frequently-asked-questions.md#widevine-streaming-for-android).
-
-## Additional notes
+For more information, see [Widevine in the Questions Collection](questions-collection.md#widevine-streaming-for-android).
Widevine is a service provided by Google Inc. and subject to the terms of service and Privacy Policy of Google, Inc.-
-## Summary
-
-This article discussed how to implement offline mode playback for DASH content protected by Widevine on Android devices. It also answered some common questions related to offline streaming of Widevine protected content.
media-services Questions Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/questions-collection.md
+
+# Mandatory fields. See more on aka.ms/skyeye/meta.
+ Title: Azure Media Services v3 question collection
+description: This article gives answers to a collection of questions about Azure Media Services v3.
+
+documentationcenter: ''
++
+editor: ''
++++ Last updated : 03/25/2021++
+<!-- NOTE this file is temporary and a placeholder until the FAQ file update is completed. -->
+
+# Media Services v3 questions collection
++
+This article gives answers to frequently asked questions about Azure Media Services v3.
+
+## General
+
+### Does Media Services store any customer data outside of the service region?
+
+- Customers attach their own storage accounts to their Azure Media Services account. All asset data is stored in these associated storage accounts and the customer controls the location and replication type of this storage.
+- Additional data associated with the Media Services account (including Content Encryption Keys, token verification keys, JobInputHttp urls, and other entity metadata) is stored in Microsoft owned storage within the region selected for the Media Services account.
+ - Due to [data residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/#more-information) in Brazil South and Southeast Asia, the additional account data is stored in a zone-redundant fashion and is contained in a single region. For Southeast Asia, all the additional account data is stored in Singapore and for Brazil South, the data is stored in Brazil.
+ - In regions other than Brazil South and Southeast Asia, the additional account data may also be stored in Microsoft owned storage in the [paired region](../../best-practices-availability-paired-regions.md).
+- Azure Media Services is a regional service and does not provide [high availability](media-services-high-availability-encoding.md) or data replication. Customers needing these features are highly encouraged to build a solution using Media Services accounts in multiple regions. A sample showing how to build a solution for High Availability with Media Services Video on Demand is available as a guide.
+
+### What are the Azure portal limitations for Media Services v3?
+
+You can use the [Azure portal](https://portal.azure.com/) to manage v3 live events, view v3 assets and jobs, get info about accessing APIs, encrypt content. <br/>For all other management tasks (for example, managing transforms and jobs or analyzing v3 content), use the [REST API](/rest/api/medi#sdks).
+
+If your video was previously uploaded into the Media Services account using Media Services v3 API or the content was generated based on a live output, you will not see the **Encode**, **Analyze**, or **Encrypt** buttons in the Azure portal. Use the Media Services v3 APIs to perform these tasks.
+
+### What Azure roles can perform actions on Azure Media Services resources?
+
+See [Azure role-based access control (Azure RBAC) for Media Services accounts](rbac-overview.md).
+
+### How do I stream to Apple iOS devices?
+
+Make sure you have **(format=m3u8-aapl)** at the end of your path (after the **/manifest** portion of the URL) to tell the streaming origin server to return HTTP Live Streaming (HLS) content for consumption on Apple iOS native devices. For details, see [Delivering content](dynamic-packaging-overview.md).
+
+### What is the recommended method to process videos?
+
+Use [Transforms](/rest/api/medi).
+
+### I uploaded, encoded, and published a video. Why won't the video play when I try to stream it?
+
+One of the most common reasons is that you don't have the streaming endpoint from which you're trying to play back in the Running state.
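A hedged sketch of checking (and, if needed, starting) the default streaming endpoint with `azure-mgmt-media`, assuming a track 2 SDK release that exposes `begin_start`; all resource names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices

client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

endpoint = client.streaming_endpoints.get(
    "<resource-group>", "<account-name>", "default")
print("Streaming endpoint state:", endpoint.resource_state)

if endpoint.resource_state != "Running":
    # Start the endpoint and wait for the long-running operation to finish.
    client.streaming_endpoints.begin_start(
        "<resource-group>", "<account-name>", "default").result()
```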
+
+### How does pagination work?
+
+When you're using pagination, you should always use the next link to enumerate the collection and not depend on a particular page size. For details and examples, see [Filtering, ordering, paging](entities-overview.md).
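In the Python SDK this means iterating the pageable result rather than indexing into pages; the iterator follows the service's next links for you (a minimal sketch, reusing placeholder resource names):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices

client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

# The pageable result fetches additional pages via the next link as you iterate.
for asset in client.assets.list("<resource-group>", "<account-name>"):
    print(asset.name)
```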
+
+### What features are not yet available in Azure Media Services v3?
+
+For details, see [the Migration Guide](migrate-v-2-v-3-migration-introduction.md).
+
+### What is the process of moving a Media Services account between subscriptions?
+
+For details, see [Moving a Media Services account between subscriptions](media-services-account-concept.md).
+
+## Live streaming
+
+### How do I stop the live stream after the broadcast is done?
+
+You can approach it from the client side or the server side.
+
+#### Client side
+
+Your web application should prompt the user if they want to end the broadcast as they're closing the browser. This is a browser event that your web application can handle.
+
+#### Server side
+
+You can monitor live events by subscribing to Azure Event Grid events. For more information, see the [EventGrid event schema](monitoring/media-services-event-schemas.md#live-event-types).
+
+You can either:
+
+* [Subscribe](monitoring/reacting-to-media-services-events.md) to the stream-level [Microsoft.Media.LiveEventEncoderDisconnected](monitoring/media-services-event-schemas.md#liveeventencoderdisconnected) events and monitor that no reconnections come in for a while to stop and delete your live event.
+* [Subscribe](monitoring/reacting-to-media-services-events.md) to the track-level [heartbeat](monitoring/media-services-event-schemas.md#liveeventingestheartbeat) events. If all tracks have an incoming bitrate dropping to 0 or the last time stamp is no longer increasing, you can safely shut down the live event. The heartbeat events come in every 20 seconds for every track, so it might be a bit verbose.
+
+### How do I insert breaks/videos and image slates during a live stream?
+
+Media Services v3 live encoding does not yet support inserting video or image slates during a live stream.
+
+You can use a [live on-premises encoder](recommended-on-premises-live-encoders.md) to switch the source video. Many apps provide the ability to switch sources, including Telestream Wirecast, Switcher Studio (on iOS), and OBS Studio (free app).
+
+## Content protection
+
+### Should I use AES-128 clear key encryption or a DRM system?
+
+Customers often wonder whether they should use AES encryption or a DRM system. The main difference between the two systems is that with AES encryption, the content key is transmitted to the client over TLS so that the key is encrypted in transit but without any additional encryption ("in the clear"). As a result, the key that's used to decrypt the content is accessible to the client player and can be viewed in a network trace on the client in plain text. AES-128 clear key encryption is suitable for use cases where the viewer is a trusted party (for example, encrypting corporate videos distributed within a company to be viewed by employees).
+
+DRM systems like PlayReady, Widevine, and FairPlay all provide an additional level of encryption on the key that's used to decrypt the content, compared to an AES-128 clear key. The content key is encrypted to a key protected by the DRM runtime in addition to any transport-level encryption provided by TLS. Additionally, decryption is handled in a secure environment at the operating system level, where it's more difficult for a malicious user to attack. We recommend DRM for use cases where the viewer might not be a trusted party and you need the highest level of security.
+
+### How do I show a video to only users who have a specific permission, without using Azure AD?
+
+You don't have to use any specific token provider such as Azure Active Directory (Azure AD). You can create your own [JWT](https://jwt.io/) provider (so-called Secure Token Service, or STS) by using asymmetric key encryption. In your custom STS, you can add claims based on your business logic.
+
+Make sure that the issuer, audience, and claims all match up exactly between what's in JWT and the `ContentKeyPolicyRestriction` value used in `ContentKeyPolicy`.
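A minimal sketch with PyJWT of minting a test token whose issuer, audience, and content key identifier claim line up with a `ContentKeyPolicyRestriction` (all values below are assumptions used for illustration):

```python
import time
import jwt  # PyJWT

token = jwt.encode(
    {
        "iss": "https://sts.contoso.com",   # must equal the restriction's issuer
        "aud": "urn:contoso-media",         # must equal the restriction's audience
        "exp": int(time.time()) + 3600,
        # Hypothetical required claim carrying the content key identifier.
        "urn:microsoft:azure:mediaservices:contentkeyidentifier": "<content-key-id>",
    },
    b"<symmetric-key-bytes>",               # the same key configured on the restriction
    algorithm="HS256",
)
```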
+
+For more information, see [Protect your content by using Media Services dynamic encryption](content-protection-overview.md).
+
+### How and where did I get a JWT token before using it to request a license or key?
+
+For production, you need to have Secure Token Service (that is, a web service), which issues a JWT token upon an HTTPS request. For test, you can use the code shown in the `GetTokenAsync` method defined in [Program.cs](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/master/AMSV3Tutorials/EncryptWithDRM/Program.cs).
+
+The player makes a request, after a user is authenticated, to STS for such a token and assigns it as the value of the token. You can use the [Azure Media Player API](https://amp.azure.net/libs/amp/latest/docs/).
+
+For an example of running STS with either a symmetric key or an asymmetric key, see the [JWT tool](https://aka.ms/jwt). For an example of a player based on Azure Media Player using such a JWT token, see the [Azure media test tool](https://aka.ms/amtest). (Expand the **player_settings** link to see the token input.)
+
+### How do I authorize requests to stream videos with AES encryption?
+
+The correct approach is to use Secure Token Service. In STS, depending on the user profile, add different claims (such as "Premium User," "Basic User," "Free Trial User"). With different claims in a JWT, the user can see different contents. For different contents or assets, `ContentKeyPolicyRestriction` will have the corresponding `RequiredClaims` value.
+
+Use Azure Media Services APIs for configuring license/key delivery and encrypting your assets (as shown in [this sample](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/master/AMSV3Tutorials/EncryptWithAES/Program.cs)).
+
+For more information, see:
+
+- [Content protection overview](content-protection-overview.md)
+- [Design of a multi-DRM content protection system with access control](design-multi-drm-system-with-access-control.md)
+
+### Should I use HTTP or HTTPS?
+The ASP.NET MVC player application must support the following:
+
+* User authentication through Azure AD, which is under HTTPS.
+* JWT exchange between the client and Azure AD, which is under HTTPS.
+* DRM license acquisition by the client, which must be under HTTPS if license delivery is provided by Media Services. The PlayReady product suite doesn't mandate HTTPS for license delivery. If your PlayReady license server is outside Media Services, you can use either HTTP or HTTPS.
+
+The ASP.NET player application uses HTTPS as a best practice, so Media Player is on a page under HTTPS. However, HTTP is preferred for streaming, so you need to consider these issues with mixed content:
+
+* The browser doesn't allow mixed content. But plug-ins like Silverlight and the OSMF plug-in for Smooth and DASH do allow it. Mixed content is a security concern because of the threat of malicious JavaScript injection, which can put customer data at risk. Browsers block this capability by default. The only way to work around it is on the server (origin) side by allowing all domains (regardless of HTTPS or HTTP). This is probably not a good idea either.
+* Avoid mixed content. Both the player application and Media Player should use HTTP or HTTPS. When you're playing mixed content, the SilverlightSS tech requires clearing a mixed-content warning. The FlashSS tech handles mixed content without a mixed-content warning.
+* If your streaming endpoint was created before August 2014, it won't support HTTPS. In this case, create and use a new streaming endpoint for HTTPS.
+
+### What about live streaming?
+
+You can use exactly the same design and implementation to help protect live streaming in Media Services by treating the asset associated with a program as a VOD asset. To provide a multi-DRM protection of the live content, apply the same setup/processing to the asset as if it were a VOD asset before you associate the asset with the live output.
+
+### What about license servers outside Media Services?
+
+Often, customers have invested in a license server farm either in their own datacenter or in one hosted by DRM service providers. With Media Services content protection, you can operate in hybrid mode. Content can be hosted and dynamically protected in Media Services, while DRM licenses are delivered by servers outside Media Services. In this case, consider the following changes:
+
+* STS needs to issue tokens that are acceptable and can be verified by the license server farm. For example, the Widevine license servers provided by Axinom require a specific JWT that contains an entitlement message. You need to have an STS to issue such a JWT.
+* You no longer need to configure license delivery service in Media Services. You need to provide the license acquisition URLs (for PlayReady, Widevine, and FairPlay) when you configure `ContentKeyPolicy`.
+
+> [!NOTE]
+> Widevine is a service provided by Google and subject to the terms of service and privacy policy of Google.
+
+## Media Services v2 vs. v3
+
+### Can I use the Azure portal to manage v3 resources?
+
+Currently, you can use the [Azure portal](https://portal.azure.com/) to:
+
+* Manage [Live Events](live-events-outputs-concept.md) in Media Services v3.
+* View (not manage) v3 [assets](assets-concept.md).
+* [Get info about accessing APIs](./access-api-howto.md).
+
+For all other management tasks (for example, [Transforms and Jobs](transforms-jobs-concept.md) and [content protection](content-protection-overview.md)), use the [REST API](/rest/api/medi#sdks).
+
+### Is there an AssetFile concept in v3?
+
+The `AssetFile` concept was removed from the Media Services API to separate Media Services from Storage SDK dependency. Now Azure Storage, not Media Services, keeps the information that belongs in the Storage SDK.
+
+For more information, see [Migrate to Media Services v3](migrate-v-2-v-3-migration-introduction.md).
+
+### Where did client-side storage encryption go?
+
+We now recommend that you use server-side storage encryption (which is on by default). For more information, see [Azure Storage Service Encryption for data at rest](../../storage/common/storage-service-encryption.md).
+
+## Offline streaming
+
+### FairPlay Streaming for iOS
+
+The following frequently asked questions provide assistance with troubleshooting offline FairPlay streaming for iOS.
+
+#### Why does only audio play but not video during offline mode?
+
+This behavior seems to be by design of the sample app. When an alternate audio track is present (which is the case for HLS) during offline mode, both iOS 10 and iOS 11 default to the alternate audio track. To compensate for this behavior in FPS offline mode, remove the alternate audio track from the stream. To do this on Media Services, add the dynamic manifest filter **audio-only=false**. In other words, an HLS URL ends with **.ism/manifest(format=m3u8-aapl,audio-only=false)**.
+
+#### Why does it still play audio only without video during offline mode after I add audio-only=false?
+
+Depending on the cache key design for the content delivery network, the content might be cached. Purge the cache.
+
+#### Is FPS offline mode supported on iOS 11 in addition to iOS 10?
+
+Yes. FPS offline mode is supported for iOS 10 and iOS 11.
+
+#### Why can't I find the document "Offline Playback with FairPlay Streaming and HTTP Live Streaming" in the FPS Server SDK?
+
+As of FPS Server SDK version 4, this document has been merged into the "FairPlay Streaming Programming Guide."
+
+#### What is the downloaded/offline file structure on iOS devices?
+
+The downloaded file structure on an iOS device looks like the following screenshot. The `_keys` folder stores downloaded FPS licenses, with one store file for each license service host. The `.movpkg` folder stores audio and video content.
+
+The first folder with a name that ends with a dash followed by a number contains video content. The numeric value is the peak bandwidth of the video renditions. The second folder with a name that ends with a dash followed by 0 contains audio content. The third folder named `Data` contains the master playlist of the FPS content. Finally, boot.xml provides a complete description of the `.movpkg` folder content.
+
+![Offline file structure for the FairPlay iOS sample app](media/offline-fairplay-for-ios/offline-fairplay-file-structure.png)
+
+Here's a sample boot.xml file:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<HLSMoviePackage xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns="http://apple.com/IMG/Schemas/HLSMoviePackage" xsi:schemaLocation="http://apple.com/IMG/Schemas/HLSMoviePackage /System/Library/Schemas/HLSMoviePackage.xsd">
+ <Version>1.0</Version>
+ <HLSMoviePackageType>PersistedStore</HLSMoviePackageType>
+ <Streams>
+ <Stream ID="1-4DTFY3A3VDRCNZ53YZ3RJ2NPG2AJHNBD-0" Path="1-4DTFY3A3VDRCNZ53YZ3RJ2NPG2AJHNBD-0" NetworkURL="https://willzhanmswest.streaming.mediaservices.windows.net/e7c76dbb-8e38-44b3-be8c-5c78890c4bb4/MicrosoftElite01.ism/QualityLevels(127000)/Manifest(aac_eng_2_127,format=m3u8-aapl)">
+ <Complete>YES</Complete>
+ </Stream>
+ <Stream ID="0-HC6H5GWC5IU62P4VHE7NWNGO2SZGPKUJ-310656" Path="0-HC6H5GWC5IU62P4VHE7NWNGO2SZGPKUJ-310656" NetworkURL="https://willzhanmswest.streaming.mediaservices.windows.net/e7c76dbb-8e38-44b3-be8c-5c78890c4bb4/MicrosoftElite01.ism/QualityLevels(161000)/Manifest(video,format=m3u8-aapl)">
+ <Complete>YES</Complete>
+ </Stream>
+ </Streams>
+ <MasterPlaylist>
+ <NetworkURL>https://willzhanmswest.streaming.mediaservices.windows.net/e7c76dbb-8e38-44b3-be8c-5c78890c4bb4/MicrosoftElite01.ism/manifest(format=m3u8-aapl,audio-only=false)</NetworkURL>
+ </MasterPlaylist>
+ <DataItems Directory="Data">
+ <DataItem>
+ <ID>CB50F631-8227-477A-BCEC-365BBF12BCC0</ID>
+ <Category>Playlist</Category>
+ <Name>master.m3u8</Name>
+ <DataPath>Playlist-master.m3u8-CB50F631-8227-477A-BCEC-365BBF12BCC0.data</DataPath>
+ <Role>Master</Role>
+ </DataItem>
+ </DataItems>
+</HLSMoviePackage>
+```
+
+### Widevine streaming for Android
+
+#### How can I deliver persistent licenses (offline enabled) for some clients/users and non-persistent licenses (offline disabled) for others? Do I have to duplicate the content and use separate content keys?
+
+Because Media Services v3 allows an asset to have multiple `StreamingLocator` instances, you can have:
+
+* One `ContentKeyPolicy` instance with `license_type = "persistent"`, `ContentKeyPolicyRestriction` with claim on `"persistent"`, and its `StreamingLocator`.
+* Another `ContentKeyPolicy` instance with `license_type="nonpersistent"`, `ContentKeyPolicyRestriction` with claim on `"nonpersistent"`, and its `StreamingLocator`.
+* Two `StreamingLocator` instances that have different `ContentKey` values.
+
+Depending on business logic of custom STS, different claims are issued in the JWT token. With the token, only the corresponding license can be obtained and only the corresponding URL can be played.
+
+#### What is the mapping between the Widevine and Media Services DRM security levels?
+
+Google's "Widevine DRM Architecture Overview" defines three security levels. However, the [Azure Media Services documentation on the Widevine license template](widevine-license-template-overview.md) outlines
+five security levels (client robustness requirements for playback). This section explains how the security levels map.
+
+Both sets of security levels are defined by Google Widevine. The difference is in usage level: architecture or API. The five security levels are used in the Widevine API. The `content_key_specs` object, which
+contains `security_level`, is deserialized and passed to the Widevine global delivery service by the Azure Media Services Widevine license service. The following table shows the mapping between the two sets of security levels.
+
+| **Security levels defined in Widevine architecture** |**Security levels used in Widevine API**|
+|||
+| **Security Level 1**: All content processing, cryptography, and control are performed within the Trusted Execution Environment (TEE). In some implementation models, security processing might be performed in different chips.|**security_level=5**: The crypto, decoding, and all handling of the media (compressed and uncompressed) must be handled within a hardware-backed TEE.<br/><br/>**security_level=4**: The crypto and decoding of content must be performed within a hardware-backed TEE.|
+| **Security Level 2**: Cryptography (but not video processing) is performed within the TEE. Decrypted buffers are returned to the application domain and processed through separate video hardware or software. At Level 2, however, cryptographic information is still processed only within the TEE.| **security_level=3**: The key material and crypto operations must be performed within a hardware-backed TEE. |
+| **Security Level 3**: There's no TEE on the device. Appropriate measures can be taken to protect the cryptographic information and decrypted content on host operating system. A Level 3 implementation might also include a hardware cryptographic engine, but that enhances only performance, not security. | **security_level=2**: Software crypto and an obfuscated decoder are required.<br/><br/>**security_level=1**: Software-based white-box crypto is required.|
+
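For reference, `content_key_specs` and `security_level` appear in the Widevine license template JSON that Media Services passes through to the Widevine service; a hedged sketch of such a template as a Python dictionary (field names follow the Widevine license template documentation, values are illustrative):

```python
import json

widevine_template = {
    "allowed_track_types": "SD_HD",
    "content_key_specs": [
        {
            "track_type": "SD",
            "security_level": 1,                       # client robustness requirement
            "required_output_protection": {"hdcp": "HDCP_NONE"},
        }
    ],
    "policy_overrides": {
        "can_play": True,
        "can_persist": True,                           # enables offline (persistent) licenses
        "license_duration_seconds": 604800,
    },
}
print(json.dumps(widevine_template, indent=2))
```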
+#### Why does content download take so long?
+
+There are two ways to improve download speed:
+
+* Enable a content delivery network so that users are more likely to hit that instead of the origin/streaming endpoint for content download. If a user hits a streaming endpoint, each HLS segment or DASH fragment is dynamically packaged and encrypted. Even though this latency is in millisecond scale for each segment or fragment, when you have an hour-long video, the accumulated latency can be large and cause a longer download.
+* Give users the option to selectively download video quality layers and audio tracks instead of all contents. For offline mode, there's no point in downloading all of the quality layers. There are two ways to achieve this:
+
+ * Client controlled: The player app automatically selects, or the user selects, the video quality layer and the audio tracks to download.
+ * Service controlled: You can use the Dynamic Manifest feature in Azure Media Services to create a (global) filter, which limits HLS playlist or DASH MPD to a single video quality layer and selected audio tracks. Then the download URL presented to users will include this filter.
+
+## Next steps
+
+[Media Services v3 overview](media-services-overview.md)
media-services Media Services Encode With Premium Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-encode-with-premium-workflow.md
This article demonstrates how to encode with **Media Encoder Premium Workflow**
Encoding tasks for the **Media Encoder Premium Workflow** require a separate configuration file, called a Workflow file. These files have a .workflow extension and are created using the [Workflow Designer](media-services-workflow-designer.md) tool.
-You can also get the default workflow files [here](https://github.com/Azure/azure-media-services-samples/tree/master/Encoding%20Presets/VoD/MediaEncoderPremiumWorkfows). The folder also contains the description of these files.
+You can also get the default workflow files [here](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/media-services/previous/media-services-encode-with-premium-workflow.md). The folder also contains the description of these files.
The workflow files need to be uploaded to your Media Services account as an Asset, and this Asset should be passed in to the encoding Task.
You can open a support ticket by navigating to [New support request](https://por
[!INCLUDE [media-services-learning-paths-include](../../../includes/media-services-learning-paths-include.md)] ## Provide feedback
media-services Media Services Retry Logic In Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-retry-logic-in-dotnet-sdk.md
The following table describes exceptions that the Media Services SDK for .NET ha
| IOException |No |Yes |No |No | ### <a name="WebExceptionStatus"></a> WebException status codes
-The following table shows for which WebException error codes the retry logic is implemented. The [WebExceptionStatus](/dotnet/api/system.net.webexceptionstatus?view=netcore-3.1) enumeration defines the status codes.
+The following table shows for which WebException error codes the retry logic is implemented. The [WebExceptionStatus](/dotnet/api/system.net.webexceptionstatus) enumeration defines the status codes.
| Status | Web Request | Storage | Query | SaveChanges | | | | | | |
media-services Media Services Workflow Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-workflow-designer.md
This tool can also be used to modify any of our [published workflows](media-serv
Once a workflow file is created, it can be uploaded as an Asset, and then be used for encoding media files. For information on how to encode with **Media Encoder Premium Workflow** using **.NET**, see [Advanced encoding with Media Encoder Premium Workflow](media-services-encode-with-premium-workflow.md). ## <a id="existing_workflows"></a>Modify existing workflows
-The default [published workflows](media-services-workflow-designer.md#existing_workflows) can be modified using the designer tool. You can get the default workflow files [here](https://github.com/Azure/azure-media-services-samples/tree/master/Encoding%20Presets/VoD/MediaEncoderPremiumWorkfows). The folder also contains the description of these files.
+The default [published workflows](media-services-workflow-designer.md#existing_workflows) can be modified using the designer tool. You can get the default workflow files [here](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/media-services/previous/media-services-encode-with-premium-workflow.md). The folder also contains the description of these files.
The following videos demonstrate how to use the designer.
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-django-aks-database.md
Quit the server with CONTROL-C.
## Clean up the resources
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group&preserve-view=true#az_group_delete) command to remove the resource group, container service, and all related resources.
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group?view=azure-cli-latest#az_group_delete) command to remove the resource group, container service, and all related resources.
```azurecli-interactive az group delete --name django-project --yes --no-wait
az group delete --name django-project --yes --no-wait
- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md) - Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md) - Learn how to manage your [postgres flexible server](./quickstart-create-server-cli.md)-- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.
+- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.
postgresql Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-configure-privatelink-cli.md
Connect to the VM *myVm* from the internet as follows:
Address: 10.1.3.4 ```
-3. Test the private link connection for the PostgreSQL server using any available client. The following example uses [Azure Data studio](/sql/azure-data-studio/download?view=sql-server-ver15&preserve-view=true) to do the operation.
+3. Test the private link connection for the PostgreSQL server using any available client. The following example uses [Azure Data studio](/sql/azure-data-studio/download) to do the operation.
4. In **New connection**, enter or select this information:
postgresql Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-configure-privatelink-portal.md
After you've created **myVm**, connect to it from the internet as follows:
Address: 10.1.3.4 ```
-3. Test the private link connection for the PostgreSQL server using any available client. In the example below I have used [Azure Data studio](/sql/azure-data-studio/download?view=sql-server-ver15&preserve-view=true) to do the operation.
+3. Test the private link connection for the PostgreSQL server using any available client. In the example below I have used [Azure Data studio](/sql/azure-data-studio/download) to do the operation.
4. In **New connection**, enter or select this information:
postgresql Howto Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-migrate-from-oracle.md
For additional assistance with completing this migration scenario, please see th
| [Oracle to Azure PostgreSQL Migration Workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf) | The purpose of this document is to provide Architects, Consultants, DBAs, and related roles with a guide for quickly fixing or working around issues while migrating workloads from Oracle to Azure Database for PostgreSQL. | | [Steps to Install ora2pg on Windows or Linux](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Steps%20to%20Install%20ora2pg%20on%20Windows%20and%20Linux.pdf) | This document is meant to be used as a Quick Installation Guide for enabling migration of schema & data from Oracle to Azure Database for PostgreSQL using the ora2pg tool on Windows or Linux. Complete details on the tool can be found at http://ora2pg.darold.net/documentation.html. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
### Contact support
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/inspect-traffic-with-azure-firewall.md
Create three virtual networks and their corresponding subnets to:
Replace the following parameters in the steps with the information below: ### Azure Firewall network+ | Parameter | Value | |--|-| | **\<resource-group-name>** | myResourceGroup |
Replace the following parameters in the steps with the information below:
| **\<subnet-address-range>** | 10.0.0.0/24 | ### Virtual machine network+ | Parameter | Value | |--|-| | **\<resource-group-name>** | myResourceGroup |
Replace the following parameters in the steps with the information below:
| **\<subnet-address-range>** | 10.1.0.0/24 | ### Private endpoint network+ | Parameter | Value | |--|-| | **\<resource-group-name>** | myResourceGroup | | **\<virtual-network-name>** | myPEVNet | | **\<region-name>** | South Central US | | **\<IPv4-address-space>** | 10.2.0.0/16 |
-| **\<subnet-name>** | PrivateEndpointSubnet | |
+| **\<subnet-name>** | PrivateEndpointSubnet |
| **\<subnet-address-range>** | 10.2.0.0/24 | [!INCLUDE [virtual-networks-create-new](../../includes/virtual-networks-create-new.md)]
In this section, you'll connect privately to the SQL Database using the private
Address: 10.2.0.4 ```
-2. Install [SQL Server command-line tools](/sql/linux/quickstart-install-connect-ubuntu?view=sql-server-ver15#tools).
+2. Install [SQL Server command-line tools](/sql/linux/quickstart-install-connect-ubuntu#tools).
3. Run the following command to connect to the SQL Server. Use the server admin and password you defined when you created the SQL Server in the previous steps.
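The article's `sqlcmd` command is truncated in this digest; as an alternative sketch, a small `pyodbc` client run from the VM can confirm connectivity over the private endpoint (server, database, and credential values below are placeholders, and the ODBC Driver 17 for SQL Server is assumed to be installed):

```python
import pyodbc

# mydbserver/mydatabase/azureuser are hypothetical values from the earlier setup steps.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydbserver.database.windows.net;"
    "DATABASE=mydatabase;"
    "UID=azureuser;PWD=<password>"
)
row = conn.execute("SELECT @@VERSION").fetchone()
print(row[0])
```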
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database.md
The service principal or managed identity must have permission to get metadata f
``` > [!Note]
- > The `Username` is your own service principal or Purview's managed identity. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15&preserve-view=true#fixed-database-roles).
+ > The `Username` is your own service principal or Purview's managed identity. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
##### Add service principal to key vault and Purview's credential
search Cognitive Search Custom Skill Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-custom-skill-scale.md
Custom skills are web APIs that implement a specific interface. A custom skill c
+ Review the [custom skill interface](cognitive-search-custom-skill-interface.md) for an introduction into the input/output interface that a custom skill should implement.
-+ Set up your environment. You could start with [this tutorial end-to-end](/python/tutorial-vs-code-serverless-python-01) to set up serverless Azure Function using Visual Studio Code and Python extensions.
++ Set up your environment. You could start with [this tutorial end-to-end](/azure/azure-functions/create-first-function-vs-code-python) to set up serverless Azure Function using Visual Studio Code and Python extensions. ## Skillset configuration
search Search Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-logs.md
Last updated 06/30/2020
# Collect and analyze log data for Azure Cognitive Search
-Diagnostic or operational logs provide insight into the detailed operations of Azure Cognitive Search and are useful for monitoring service and workload processes. Internally, some system information exists on the backend for a short period of time, sufficient for investigation and analysis if you file a support ticket. However, if you want self-direction over operational data, you should configure a diagnostic setting to specify where logging information is collected.
+Diagnostic or operational logs provide insight into the detailed operations of Azure Cognitive Search and are useful for monitoring service and workload processes. Internally, Microsoft preserves system information on the backend for a short period of time (about 30 days), sufficient for investigation and analysis if you file a support ticket. However, if you want ownership over operational data, you should configure a diagnostic setting to specify where logging information is collected.
Diagnostic logging is enabled through integration with [Azure Monitor](../azure-monitor/index.yml).
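For reference, one way to create such a diagnostic setting from PowerShell is sketched below. This is a minimal sketch assuming the Az PowerShell modules; the resource group, search service, and Log Analytics workspace names are placeholders, and `OperationLogs`/`AllMetrics` are the category names used by Cognitive Search diagnostics.

```powershell
# Placeholders - substitute your own resource group, search service, and workspace names.
$searchService = Get-AzResource -ResourceGroupName "my-rg" `
    -ResourceType "Microsoft.Search/searchServices" -Name "my-search-service"
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-workspace"

# Route the search service's operation logs and all metrics to the Log Analytics workspace.
Set-AzDiagnosticSetting -ResourceId $searchService.ResourceId `
    -WorkspaceId $workspace.ResourceId `
    -Enabled $true `
    -Category OperationLogs `
    -MetricCategory AllMetrics
```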
Two tables contain logs and metrics for Azure Cognitive Search: **AzureDiagnosti
1. Enter the following query to return a tabular result set.
- ```
+ ```kusto
AzureMetrics
- | project MetricName, Total, Count, Maximum, Minimum, Average
+ | project MetricName, Total, Count, Maximum, Minimum, Average
``` 1. Repeat the previous steps, starting with **AzureDiagnostics** to return all columns for informational purposes, followed by a more selective query that extracts more interesting information.
- ```
+ ```kusto
AzureDiagnostics | project OperationName, resultSignature_d, DurationMs, Query_s, Documents_d, IndexName_s | where OperationName == "Query.Search"
If you enabled diagnostic logging, you can query **AzureDiagnostics** for a list
Return a list of operations and a count of each one.
-```
+```kusto
AzureDiagnostics | summarize count() by OperationName ```
AzureDiagnostics
Correlate query request with indexing operations, and render the data points across a time chart to see operations coincide.
-```
+```kusto
AzureDiagnostics | summarize OperationName, Count=count() | where OperationName in ('Query.Search', 'Indexing.Index')
search Search Monitor Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-usage.md
The following screenshot helps you locate monitoring information in the portal.
* **Monitoring** tab, on the main Overview page, shows query volume, latency, and whether the service is under pressure. * **Activity log**, in the left navigation pane, is connected to Azure Resource Manager. The activity log reports on actions undertaken by Resource
-* **Monitoring** settings, further down, provides configurable alerts, metrics, and diagnostic logs. Create these when you need them. Once data is collected and stored, you can query or visualize the information for insights.
+* **Monitoring** settings, further down, provides configurable alerts, metrics visualization, and diagnostic logs. Create these when you need them. Once data is collected and stored, you can query or visualize the information for insights.
-![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png
+ ![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png
"Azure Monitor integration in a search service") > [!NOTE]
Azure Monitor has its own billing structure and the diagnostic logs referenced i
## Monitor user access
-Because search indexes are a component of a larger client application, there is no built-in methodology for controlling or monitoring per-user access to an index. Requests are assumed to come from a client application, for either admin or query requests. Admin read-write operations include creating, updating, deleting objects across the entire service. Read-only operations are queries against the documents collection, scoped to a single index.
+Because search indexes are a component of a larger client application, there is no built-in methodology for controlling or monitoring per-user access to an index. Requests are assumed to come from a client application that presents either an admin or a query request. Admin read-write operations include creating, updating, and deleting objects across the entire service. Read-only operations are queries against the documents collection, scoped to a single index.
As such, what you'll see in the activity logs are references to calls using admin keys or query keys. The appropriate key is included in requests originating from client code. The service is not equipped to handle identity tokens or impersonation.
security-center Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/deploy-vulnerability-assessment-vm.md
The vulnerability scanner extension works as follows:
>[!IMPORTANT] > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following URLs to your allow lists (via port 443 - the default for HTTPS): >
- > - https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
+ > - https://www.qualys.com/company/newsroom/news-releases/usa/2017-02-08-qualys-expands-global-cloud-platform-with-three-new-secure-operations-centers/ - Qualys' US data center
> - https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center > > If your machine is in a European Azure region, its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
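Before deploying the extension at scale, it can help to confirm that a target Windows machine can actually reach the Qualys endpoints over port 443. The following is a sketch only; the `qagpublic` host names are assumed from the URLs that appear in this section, so adjust the list to match whatever your allow list actually contains.

```powershell
# Host names assumed from the allow list above; adjust as needed.
$qualysEndpoints = @("qagpublic.qg3.apps.qualys.com", "qagpublic.qg2.apps.qualys.eu")

foreach ($endpoint in $qualysEndpoints) {
    # TcpTestSucceeded should be True when outbound HTTPS (port 443) is open.
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```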
The Azure Security Center vulnerability assessment extension (powered by Qualys)
During setup, Security Center checks to ensure that the machine can communicate with the following two Qualys data centers (via port 443 - the default for HTTPS): -- https://qagpublic.qg3.apps.qualys.com - Qualys' US data center-- https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
+- https://www.qualys.com/company/newsroom/news-releases/usa/2017-02-08-qualys-expands-global-cloud-platform-with-three-new-secure-operations-centers/ - Qualys' US data center
+- https://www.qualys.com/company/newsroom/news-releases/usa/2017-02-08-qualys-expands-global-cloud-platform-with-three-new-secure-operations-centers/ - Qualys' European data center
The extension doesn't currently accept any proxy configuration details.
Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates
Security Center also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+- Azure Container Registry images - see [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/services-technologies.md
# Security services and technologies available on Azure
-In our discussions with current and future Azure customers, weΓÇÖre often asked ΓÇ£do you have a list of all the security-related services and technologies that Azure has to offer?ΓÇ¥
+In our discussions with current and future Azure customers, we're often asked "do you have a list of all the security-related services and technologies that Azure has to offer?"
-When you evaluate cloud service provider options, itΓÇÖs helpful to have this information. So we have provided this list to get you started.
+When you evaluate cloud service provider options, it's helpful to have this information. So we have provided this list to get you started.
Over time, this list will change and grow, just as Azure does. Make sure to check this page on a regular basis to stay up-to-date on our security-related services and technologies.
Over time, this list will change and grow, just as Azure does. Make sure to chec
| [Azure&nbsp;SQL&nbsp;Firewall](../../azure-sql/database/firewall-configure.md)|A network access control feature that protects against network-based attacks to database. | |[Azure&nbsp;SQL&nbsp;Cell&nbsp;Level Encryption](/archive/blogs/sqlsecurity/recommendations-for-using-cell-level-encryption-in-azure-sql-database)| A database security technology that provides encryption at a granular level. | | [Azure&nbsp;SQL&nbsp;Connection Encryption](../../azure-sql/database/logins-create-manage.md)|To provide security, SQL Database controls access with firewall rules limiting connectivity by IP address, authentication mechanisms requiring users to prove their identity, and authorization mechanisms limiting users to specific actions and data. |
-| [Azure SQL Always Encryption](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-2017)|Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. |
-| [Azure&nbsp;SQL&nbsp;Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql?view=azuresqldb-current)| A database security feature that encrypts the storage of an entire database. |
+| [Azure SQL Always Encryption](/sql/relational-databases/security/encryption/always-encrypted-database-engine)|Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. |
+| [Azure&nbsp;SQL&nbsp;Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)| A database security feature that encrypts the storage of an entire database. |
| [Azure SQL Database Auditing](../../azure-sql/database/auditing-overview.md)|A database auditing feature that tracks database events and writes them to an audit log in your Azure storage account. |
Over time, this list will change and grow, just as Azure does. Make sure to chec
| [Azure Application Proxy](../../active-directory/manage-apps/application-proxy.md)| An authenticating front-end used to secure remote access for web applications hosted on-premises. | |[Azure Firewall](../../firewall/overview.md)|A managed, cloud-based network security service that protects your Azure Virtual Network resources.| |[Azure DDoS protection](../../ddos-protection/ddos-protection-overview.md)|Combined with application design best practices, provides defense against DDoS attacks.|
-|[Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)|Extends your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection.|
+|[Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)|Extends your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection.|
site-recovery Site Recovery Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-sql.md
Your choice of a BCDR technology to recover SQL Server instances should be based
Deployment type | BCDR technology | Expected RTO for SQL Server | Expected RPO for SQL Server
--- | --- | --- | ---
-SQL Server on an Azure infrastructure as a service (IaaS) virtual machine (VM) or at on-premises.| [Always On availability group](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-2017) | The time taken to make the secondary replica as primary. | Because replication to the secondary replica is asynchronous, there's some data loss.
-SQL Server on an Azure IaaS VM or at on-premises.| [Failover clustering (Always On FCI)](/sql/sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server?view=sql-server-2017) | The time taken to fail over between the nodes. | Because Always On FCI uses shared storage, the same view of the storage instance is available on failover.
-SQL Server on an Azure IaaS VM or at on-premises.| [Database mirroring (high-performance mode)](/sql/database-engine/database-mirroring/database-mirroring-sql-server?view=sql-server-2017) | The time taken to force the service, which uses the mirror server as a warm standby server. | Replication is asynchronous. The mirror database might lag somewhat behind the principal database. The lag is typically small. But it can become large if the principal or mirror server's system is under a heavy load.<br/><br/>Log shipping can be a supplement to database mirroring. It's a favorable alternative to asynchronous database mirroring.
+SQL Server on an Azure infrastructure as a service (IaaS) virtual machine (VM) or at on-premises.| [Always On availability group](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) | The time taken to make the secondary replica as primary. | Because replication to the secondary replica is asynchronous, there's some data loss.
+SQL Server on an Azure IaaS VM or at on-premises.| [Failover clustering (Always On FCI)](/sql/sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server) | The time taken to fail over between the nodes. | Because Always On FCI uses shared storage, the same view of the storage instance is available on failover.
+SQL Server on an Azure IaaS VM or at on-premises.| [Database mirroring (high-performance mode)](/sql/database-engine/database-mirroring/database-mirroring-sql-server) | The time taken to force the service, which uses the mirror server as a warm standby server. | Replication is asynchronous. The mirror database might lag somewhat behind the principal database. The lag is typically small. But it can become large if the principal or mirror server's system is under a heavy load.<br/><br/>Log shipping can be a supplement to database mirroring. It's a favorable alternative to asynchronous database mirroring.
SQL as platform as a service (PaaS) on Azure.<br/><br/>This deployment type includes single databases and elastic pools. | Active geo-replication | 30 seconds after failover is triggered.<br/><br/>When failover is activated for one of the secondary databases, all other secondaries are automatically linked to the new primary. | RPO of five seconds.<br/><br/>Active geo-replication uses the Always On technology of SQL Server. It asynchronously replicates committed transactions on the primary database to a secondary database by using snapshot isolation.<br/><br/>The secondary data is guaranteed to never have partial transactions. SQL as PaaS configured with active geo-replication on Azure.<br/><br/>This deployment type includes a managed instances, elastic pools, and single databases. | Auto-failover groups | RTO of one hour. | RPO of five seconds.<br/><br/>Auto-failover groups provide the group semantics on top of active geo-replication. But the same asynchronous replication mechanism is used. SQL Server on an Azure IaaS VM or at on-premises.| Replication with Azure Site Recovery | RTO is typically less than 15 minutes. To learn more, read the [RTO SLA provided by Site Recovery](https://azure.microsoft.com/support/legal/sla/site-recovery/v1_2/). | One hour for application consistency and five minutes for crash consistency. If you are looking for lower RPO, use other BCDR technologies.
BCDR technologies Always On, active geo-replication, and auto-failover groups ha
1. Import the scripts to fail over SQL Availability Group in both a [Resource Manager virtual machine](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/asr-automation-recovery/scripts/ASR-SQL-FailoverAG.ps1) and a [classic virtual machine](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/asr-automation-recovery/scripts/ASR-SQL-FailoverAGClassic.ps1). Import the scripts into your Azure Automation account.
- [![Image of a "Deploy to Azure" logo](https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/c4803408-340e-49e3-9a1f-0ed3f689813d.png)](https://aka.ms/asr-automationrunbooks-deploy)
+ [![Image of a "Deploy to Azure" logo](https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/c4803408-340e-49e3-9a1f-0ed3f689813d.png)](https://aka.ms/asr-automationrunbooks-deploy)
1. Add the ASR-SQL-FailoverAG script as a pre-action of the first group of the recovery plan.
BCDR technologies Always On, active geo-replication, and auto-failover groups ha
### Step 4: Conduct a test failover
-Some BCDR technologies such as SQL Always On donΓÇÖt natively support test failover. We recommend the following approach *only when using such technologies*.
+Some BCDR technologies such as SQL Always On don't natively support test failover. We recommend the following approach *only when using such technologies*.
1. Set up [Azure Backup](../backup/backup-azure-vms-first-look-arm.md) on the VM that hosts the availability group replica in Azure. 1. Before triggering test failover of the recovery plan, recover the VM from the backup taken in the previous step.
- ![Screenshot showing window for restoring a configuration from Azure Backup](./media/site-recovery-sql/restore-from-backup.png)
+ ![Screenshot showing window for restoring a configuration from Azure Backup](./media/site-recovery-sql/restore-from-backup.png)
1. [Force a quorum](/sql/sql-server/failover-clusters/windows/force-a-wsfc-cluster-to-start-without-a-quorum#PowerShellProcedure) in the VM that was restored from backup. 1. Update the IP address of the listener to be an address available in the test failover network.
- ![Screenshot of rules window and IP address properties dialog](./media/site-recovery-sql/update-listener-ip.png)
+ ![Screenshot of rules window and IP address properties dialog](./media/site-recovery-sql/update-listener-ip.png)
1. Bring the listener online.
- ![Screenshot of window labeled Content_AG showing server names and statuses](./media/site-recovery-sql/bring-listener-online.png)
+ ![Screenshot of window labeled Content_AG showing server names and statuses](./media/site-recovery-sql/bring-listener-online.png)
1. Ensure that the load balancer in the failover network has one front-end IP address corresponding to each availability group listener, and that the SQL Server VM is in the back-end pool.
- ![Screenshot of window titled "SQL-AlwaysOn-LB - Frontend IP Pool](./media/site-recovery-sql/create-load-balancer1.png)
+ ![Screenshot of window titled "SQL-AlwaysOn-LB - Frontend IP Pool](./media/site-recovery-sql/create-load-balancer1.png)
- ![Screenshot of window titled "SQL-AlwaysOn-LB - Backend IP Pool](./media/site-recovery-sql/create-load-balancer2.png)
+ ![Screenshot of window titled "SQL-AlwaysOn-LB - Backend IP Pool](./media/site-recovery-sql/create-load-balancer2.png)
1. In later recovery groups, add failover of your application tier followed by your web tier for this recovery plan.
site-recovery Upgrade 2012R2 To 2016 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/upgrade-2012R2-to-2016.md
Before you upgrade, note the following:-
- Check the service accounts being used for System Center Virtual Machine Manager Agent service - Make sure that you have a backup of the VMM Database. - Note down the database name of the SCVMM servers involved. This can be done by navigating to **VMM console** -> **Settings** -> **General** -> **Database connection**
- - Note down the VMM ID of both the 2012R2 primary and recovery VMM servers. VMM ID can be found from the registry "HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\SetupΓÇ¥.
+ - Note down the VMM ID of both the 2012R2 primary and recovery VMM servers. The VMM ID can be found under the registry key "HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup" (see the sketch after this list).
 - Ensure that the new SCVMMs that you add to the cluster have the same names as before. - If you are replicating between two of your sites managed by SCVMMs on both sides, ensure that you upgrade your recovery side before you upgrade the primary side.
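As a convenience for the VMM ID step in the list above, the registry key can be read with a few lines of PowerShell. This is a sketch only; the `VmmID` value name is an assumption, so list all the properties under the key if that value isn't present on your server.

```powershell
# Setup key noted in the checklist above.
$vmmSetupKey = "HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup"

# Show every value under the key, then pick out the ID (value name assumed to be VmmID).
Get-ItemProperty -Path $vmmSetupKey | Format-List *
(Get-ItemProperty -Path $vmmSetupKey).VmmID
```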
Before you upgrade, note the following:-
> While upgrading the SCVMM 2012 R2, under Distributed Key Management, select to **store encryption keys in Active Directory**. Choose the settings for the service account and distributed key management carefully. Based on your selection, encrypted data such as passwords in templates might not be available after the upgrade, and can potentially affect replication with Azure Site Recovery > [!IMPORTANT]
-> Please refer to the detailed SCVMM documentation of [prerequisites](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016#requirements-and-limitations)
+> Please refer to the detailed SCVMM documentation of [prerequisites](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#requirements-and-limitations)
## Windows Server 2012 R2 hosts which aren't managed by SCVMM The list of steps mentioned below applies to the user configuration from [Hyper-V hosts to Azure](./hyper-v-azure-architecture.md) executed by following this [tutorial](./hyper-v-prepare-on-premises-tutorial.md)
The list of steps mentioned below applies to the user configuration from [Hyper-
2. With every new Windows Server 2016 host that is introduced in the cluster, remove the reference of a Windows Server 2012 R2 host from Azure Site Recovery by following steps mentioned [here]. This should be the host you chose to drain & evict from the cluster. 3. Once the *Update-VMVersion* command has been executed for all virtual machines, the upgrades have been completed. 4. Use the steps mentioned [here](./hyper-v-azure-tutorial.md#set-up-the-source-environment) to register the new Windows Server 2016 host to Azure Site Recovery. Please note that the Hyper-V site is already active and you just need to register the new host in the cluster.
-5. Go to Azure portal and verify the replicated health status inside the Recovery Services
+5. Go to the Azure portal and verify the replicated health status inside the Recovery Services
## Upgrade Windows Server 2012 R2 hosts managed by stand-alone SCVMM 2012 R2 server Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SCVMM 2012 R2 to SCVMM 2016. Follow the below steps:-
Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SC
**Upgrade standalone SCVMM 2012 R2 to SCVMM 2016** 1. Uninstall ASR provider by navigating to Control Panel -> Programs -> Programs and Features ->Microsoft Azure Site Recovery , and click on Uninstall
-2. [Retain the SCVMM database and upgrade the operating system](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016#back-up-and-upgrade-the-operating-system)
+2. [Retain the SCVMM database and upgrade the operating system](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#back-up-and-upgrade-the-operating-system)
3. In **Add remove programs**, select **VMM** > **Uninstall**. b. Select **Remove Features**, and then select **VMM management Server and VMM Console**. c. In **Database Options**, select **Retain database**. d. Review the summary and click **Uninstall**.
-4. [Install VMM 2016](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016#install-vmm-2016)
-5. Launch SCVMM and check status of each hosts under **Fabrics** tab. Click **Refresh** to get the most recent status. You should see status as ΓÇ£Needs AttentionΓÇ¥.
-17. Install the latest [Microsoft Azure Site Recovery Provider](https://aka.ms/downloaddra) on the SCVMM.
-16. Install the latest [Microsoft Azure Recovery Service (MARS) agent](https://aka.ms/latestmarsagent) on each host of the cluster. Refresh to ensure SCVMM is able to successfully query the hosts.
+4. [Install VMM 2016](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#install-vmm-2016)
+5. Launch SCVMM and check the status of each host under the **Fabrics** tab. Click **Refresh** to get the most recent status. You should see the status as "Needs Attention".
+17. Install the latest [Microsoft Azure Site Recovery Provider](https://aka.ms/downloaddra) on the SCVMM.
+16. Install the latest [Microsoft Azure Recovery Service (MARS) agent](https://aka.ms/latestmarsagent) on each host of the cluster. Refresh to ensure SCVMM is able to successfully query the hosts.
**Upgrade Windows Server 2012 R2 hosts to Windows Server 2016** 1. Follow the steps mentioned [here](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to execute the rolling cluster upgrade process. 2. After adding the new host to the cluster, refresh the host from the SCVMM console to install the VMM Agent on this updated host. 3. Execute *Update-VMVersion* to update the VM versions of the virtual machines (a sketch follows this list).
-4. Go to Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services Vault.
+4. Go to the Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services Vault.
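The *Update-VMVersion* step referenced in the list above can be scripted on the upgraded Hyper-V host. A minimal sketch; note that a VM's configuration version can only be upgraded while the VM is off.

```powershell
# Upgrade the configuration version of every VM on this host that is currently off.
Get-VM | Where-Object { $_.State -eq "Off" } | Update-VMVersion -Confirm:$false
```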
## Upgrade Windows Server 2012 R2 hosts are managed by highly available SCVMM 2012 R2 server Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SCVMM 2012 R2 to SCVMM 2016. The following modes of upgrade are supported while upgrading SCVMM 2012 R2 servers configured with Azure Site Recovery - Mixed mode with no additional VMM servers & Mixed mode with additional VMM servers.
Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SC
**Upgrade SCVMM 2012 R2 to SCVMM 2016** 1. Uninstall ASR provider by navigating to Control Panel -> Programs -> Programs and Features ->Microsoft Azure Site Recovery , and click on Uninstall
-2. Follow the steps mentioned [here](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016#upgrade-a-standalone-vmm-server) based on the mode of upgrade you wish to execute.
-3. Launch SCVMM console and check status of each hosts under **Fabrics** tab. Click **Refresh** to get the most recent status. You should see status as ΓÇ£Needs AttentionΓÇ¥.
+2. Follow the steps mentioned [here](/system-center/vmm/upgrade-vmm?view=sc-vmm-2016&preserve-view=true#upgrade-a-standalone-vmm-server) based on the mode of upgrade you wish to execute.
+3. Launch the SCVMM console and check the status of each host under the **Fabrics** tab. Click **Refresh** to get the most recent status. You should see the status as "Needs Attention".
4. Install the latest [Microsoft Azure Site Recovery Provider](https://aka.ms/downloaddra) on the SCVMM. 5. Update the latest [Microsoft Azure Recovery Service (MARS) agent](https://aka.ms/latestmarsagent) on each host of the cluster. Refresh to ensure SC VMM is able to successfully query the hosts.
Before you upgrade your Windows Sever 2012 R2 hosts, you need to upgrade the SC
1. Follow the steps mentioned [here](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to execute the rolling cluster upgrade process. 2. After adding the new host to the cluster, refresh the host from the SCVMM console to install the VMM Agent on this updated host. 3. Execute *Update-VMVersion* to update the VM versions of the Virtual machines.
-4. Go to Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services Vault.
+4. Go to the Azure portal and verify the replicated health status of the virtual machines inside the Recovery Services Vault.
## Next steps Once the upgrade of the hosts is performed, you can perform a [test failover](tutorial-dr-drill-azure.md) to test the health of your replication and disaster recovery status.-
spatial-anchors Get Started Xamarin Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/quickstarts/get-started-xamarin-android.md
To complete this quickstart, make sure you have:
- <a href="https://git-scm.com/download/win" target="_blank">Git for Windows</a>. - <a href="https://git-lfs.github.com/">Git LFS</a>. - If using macOS:
- - An up-to-date version of <a href="/visualstudio/mac/installation?view=vsmac-2019" target="_blank">Visual Studio for Mac 8.1+</a>.
+ - An up-to-date version of <a href="/visualstudio/mac/installation?view=vsmac-2019&preserve-view=true" target="_blank">Visual Studio for Mac 8.1+</a>.
- <a href="https://git-scm.com/download/mac" target="_blank">Git for macOS</a>. - <a href="https://git-lfs.github.com/">Git LFS</a>. - The latest version of Xamarin.Android installed and running on your platform of choice. For a guide to installing Xamarin.Android, refer to the [Xamarin.Android Installation](/xamarin/android/get-started/installation/index) guides.
spatial-anchors Get Started Xamarin Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/quickstarts/get-started-xamarin-ios.md
You'll learn how to:
To complete this quickstart, make sure you have: - A Mac running macOS High Sierra (10.13) or above with: - The latest version of Xcode and iOS SDK installed from the [App Store](https://itunes.apple.com/us/app/xcode/id497799835?mt=12).
- - An up-to-date version of <a href="/visualstudio/mac/installation?view=vsmac-2019" target="_blank">Visual Studio for Mac 8.1+</a>.
+ - An up-to-date version of <a href="/visualstudio/mac/installation?view=vsmac-2019&preserve-view=true" target="_blank">Visual Studio for Mac 8.1+</a>.
- <a href="https://git-scm.com/download/mac" target="_blank">Git for macOS</a>. - <a href="https://git-lfs.github.com/">Git LFS</a>.
static-web-apps Publish Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-devops.md
In this tutorial, you learn to:
1. Under _Deployment details_ ensure that you select **Other**. This enables you to use the code in your Azure DevOps repository.
- > [!NOTE]
- > The functionality to select _Other_ is currently rolling out and may not be available yet in all Azure subscriptions.
- :::image type="content" source="media/publish-devops/create-resource.png" alt-text="Deployment details - other"::: 1. Once the deployment is successful, navigate to the new Static Web Apps resource.
storage Data Lake Storage Supported Blob Storage Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
The following table shows how each Blob storage feature is supported with Data L
|Anonymous public access |Generally available|Generally available| See [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).| |Customer-managed account failover|Not yet supported|Not yet supported|[Disaster recovery and account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| |Customer-provided keys|Not yet supported|Not yet supported|[Provide an encryption key on a request to Blob storage](encryption-customer-provided-keys.md)|
-|Encryption scopes|Not yet supported|Not yet supported|[Create and manage encryption scopes (preview)](encryption-scope-manage.md)|
+|Encryption scopes|Not yet supported|Not yet supported|[Create and manage encryption scopes](encryption-scope-manage.md)|
|Change feed|Not yet supported|Not yet supported|[Change feed support in Azure Blob storage](storage-blob-change-feed.md)| |Object replication|Not yet supported|Not yet supported|[Configure object replication for block blobs](object-replication-configure.md)| |Blob versioning|Not yet supported|Not yet supported|[Enable and manage blob versioning](versioning-enable.md)|
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-manage.md
Title: Create and manage encryption scopes (preview)
+ Title: Create and manage encryption scopes
description: Learn how to create an encryption scope to isolate blob data at the container or blob level. Previously updated : 03/05/2021 Last updated : 03/26/2021
-# Create and manage encryption scopes (preview)
+# Create and manage encryption scopes
-Encryption scopes (preview) enable you to manage encryption at the level of an individual blob or container. An encryption scope isolates blob data in a secure enclave within a storage account. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. For more information about encryption scopes, see [Encryption scopes for Blob storage (preview)](encryption-scope-overview.md).
+Encryption scopes enable you to manage encryption at the level of an individual blob or container. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. For more information about encryption scopes, see [Encryption scopes for Blob storage](encryption-scope-overview.md).
This article shows how to create an encryption scope. It also shows how to specify an encryption scope when you create a blob or container.
-> [!IMPORTANT]
-> Encryption scopes are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To avoid unexpected costs, be sure to disable any encryption scopes that you do not currently need.
- [!INCLUDE [storage-data-lake-gen2-support](../../../includes/storage-data-lake-gen2-support.md)] ## Create an encryption scope
-You can create an encryption scope with a Microsoft-managed key or with a customer-managed key that's stored in Azure Key Vault or Azure Key Vault Managed Hardware Security Model (HSM) (preview). To create an encryption scope with a customer-managed key, you must first create a key vault or managed HSM and add the key you intend to use for the scope. The key vault or managed HSM must have purge protection enabled and must be in the same region as the storage account.
+You can create an encryption scope that is protected with a Microsoft-managed key or with a customer-managed key that is stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Model (HSM) (preview). To create an encryption scope with a customer-managed key, you must first create a key vault or managed HSM and add the key you intend to use for the scope. The key vault or managed HSM must have purge protection enabled and must be in the same region as the storage account.
An encryption scope is automatically enabled when you create it. After you create the encryption scope, you can specify it when you create a blob. You can also specify a default encryption scope when you create a container, which automatically applies to all blobs in the container.
To create an encryption scope in the Azure portal, follow these steps:
1. Select the **Encryption** setting. 1. Select the **Encryption Scopes** tab. 1. Click the **Add** button to add a new encryption scope.
-1. In the Create **Encryption Scope** pane, enter a name for the new scope.
-1. Select the type of encryption, either **Microsoft-managed keys** or **Customer-managed keys**.
+1. In the **Create Encryption Scope** pane, enter a name for the new scope.
+1. Select the desired type of encryption key support, either **Microsoft-managed keys** or **Customer-managed keys**.
- If you selected **Microsoft-managed keys**, click **Create** to create the encryption scope.
- - If you selected **Customer-managed keys**, specify a key vault or managed HSM, key, and key version to use for this encryption scope, as shown in the following image.
+ - If you selected **Customer-managed keys**, then select a subscription and specify a key vault or a managed HSM and a key to use for this encryption scope, as shown in the following image.
:::image type="content" source="media/encryption-scope-manage/create-encryption-scope-customer-managed-key-portal.png" alt-text="Screenshot showing how to create encryption scope in Azure portal"::: # [PowerShell](#tab/powershell)
-To create an encryption scope with PowerShell, first install the Az.Storage preview module version. Using the latest preview version is recommended, but encryption scopes are supported in version 1.13.4-preview and later. Remove any other versions of the Az.Storage module.
-
-The following command installs Az.Storage [2.1.1-preview](https://www.powershellgallery.com/packages/Az.Storage/2.1.1-preview) module:
-
-```powershell
-Install-Module -Name Az.Storage -RequiredVersion 2.1.1-preview -AllowPrerelease
-```
+To create an encryption scope with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) PowerShell module, version 3.4.0 or later.
### Create an encryption scope protected by Microsoft-managed keys
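As a sketch of this case (placeholder names; `$scopeName1` is reused by the container example later in the article), the scope is created with the `-StorageEncryption` switch so that it is protected by Microsoft-managed keys:

```powershell
$rgName = "<resource-group>"
$accountName = "<storage-account>"
$scopeName1 = "customer1scope"

# Create an encryption scope protected by Microsoft-managed keys.
New-AzStorageEncryptionScope -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName1 `
    -StorageEncryption
```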
Remember to replace the placeholder values in the example with your own values:
$rgName = "<resource-group>" $accountName = "<storage-account>" $keyVaultName = "<key-vault>"
-$keyUri = "<key-uri-with-version>"
+$keyUri = "<key-uri>"
$scopeName2 = "customer2scope" - # Assign a system managed identity to the storage account. $storageAccount = Set-AzStorageAccount -ResourceGroupName $rgName ` -Name $accountName `
Set-AzKeyVaultAccessPolicy `
-PermissionsToKeys wrapkey,unwrapkey,get ```
-Next, call the **New-AzStorageEncryptionScope** command with the `-KeyvaultEncryption` parameter, and specify the key URI. Be sure to include the key version on the key URI. Remember to replace the placeholder values in the example with your own values:
+Next, call the **New-AzStorageEncryptionScope** command with the `-KeyvaultEncryption` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+
+Remember to replace the placeholder values in the example with your own values:
```powershell New-AzStorageEncryptionScope -ResourceGroupName $rgName `
New-AzStorageEncryptionScope -ResourceGroupName $rgName `
# [Azure CLI](#tab/cli)
-To create an encryption scope with Azure CLI, first install Azure CLI version 2.4.0 or later.
+To create an encryption scope with Azure CLI, first install Azure CLI version 2.20.0 or later.
### Create an encryption scope protected by Microsoft-managed keys
az keyvault set-policy \
--key-permissions get unwrapKey wrapKey ```
-Next, call the **az storage account encryption-scope create** command with the `--key-uri` parameter, and specify the key URI. Be sure to include the key version on the key URI. Remember to replace the placeholder values in the example with your own values:
+Next, call the **az storage account encryption-scope create** command with the `--key-uri` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+
+Remember to replace the placeholder values in the example with your own values:
```azurecli-interactive az storage account encryption-scope create \
az storage account encryption-scope create \
-To learn how to configure Azure Storage encryption with customer-managed keys in a key vault, see [Configure encryption with customer-managed keys stored in Azure Key Vault](../common/customer-managed-keys-configure-key-vault.md). To configure customer-managed keys in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](../common/customer-managed-keys-configure-key-vault-hsm.md).
+To learn how to configure Azure Storage encryption with customer-managed keys in a key vault or managed HSM, see the following articles:
+
+- [Configure encryption with customer-managed keys stored in Azure Key Vault](../common/customer-managed-keys-configure-key-vault.md)
+- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](../common/customer-managed-keys-configure-key-vault-hsm.md).
## List encryption scopes for storage account
To view the encryption scopes for a storage account in the Azure portal, navigat
:::image type="content" source="media/encryption-scope-manage/list-encryption-scopes-portal.png" alt-text="Screenshot showing list of encryption scopes in Azure portal":::
+To view details for a customer-managed key, including the key URI and version and whether the key version is automatically updated, follow the link in the **Key** column.
++ # [PowerShell](#tab/powershell) To list the encryption scopes available for a storage account with PowerShell, call the **Get-AzStorageEncryptionScope** command. Remember to replace the placeholder values in the example with your own values:
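A minimal sketch of that call, with placeholder names:

```powershell
# List all encryption scopes defined on the storage account.
Get-AzStorageEncryptionScope -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>"
```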
az storage account encryption-scope list \
When you create a container, you can specify a default encryption scope. Blobs in that container will use that scope by default.
-An individual blob can be created with its own encryption scope, unless the container is configured to require that all blobs use its default scope.
+An individual blob can be created with its own encryption scope, unless the container is configured to require that all blobs use the default scope. For more information, see [Encryption scopes for containers and blobs](encryption-scope-overview.md#encryption-scopes-for-containers-and-blobs).
# [Portal](#tab/portal)
To create a container with a default encryption scope in the Azure portal, first
# [PowerShell](#tab/powershell)
-To create a container with a default encryption scope with PowerShell, call the [New-AzRmStorageContainer](/powershell/module/az.storage/new-azrmstoragecontainer) command, specifying the scope for the `-DefaultEncryptionScope` parameter. The **New-AzRmStorageContainer** command creates a container by using the Azure Storage resource provider, which enables configuration of encryption scopes and other resource management operations.
-
-To force all blobs in a container to use the container's default scope, set the `-PreventEncryptionScopeOverride` parameter to `true`.
+To create a container with a default encryption scope with PowerShell, call the [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) command, specifying the scope for the `-DefaultEncryptionScope` parameter. To force all blobs in a container to use the container's default scope, set the `-PreventEncryptionScopeOverride` parameter to `true`.
```powershell $containerName1 = "container1"
-$containerName2 = "container2"
+$ctx = New-AzStorageContext -StorageAccountName $accountName -UseConnectedAccount
# Create a container with a default encryption scope that cannot be overridden.
-New-AzRmStorageContainer -ResourceGroupName $rgName `
- -StorageAccountName $accountName `
- -Name $containerName1 `
+New-AzStorageContainer -Name $containerName1 `
+ -Context $ctx `
-DefaultEncryptionScope $scopeName1 ` -PreventEncryptionScopeOverride $true ```
When you upload a blob, you can specify an encryption scope for that blob, or us
# [Portal](#tab/portal)
-To upload a blob with an encryption scope specified in the Azure portal, first create the encryption scope as described in [Create an encryption scope](#create-an-encryption-scope). Next, follow these steps to create the blob:
+To upload a blob with an encryption scope via the Azure portal, first create the encryption scope as described in [Create an encryption scope](#create-an-encryption-scope). Next, follow these steps to create the blob:
1. Navigate to the container to which you want to upload the blob. 1. Select the **Upload** button, and locate the blob to upload.
To upload a blob with an encryption scope specified in the Azure portal, first c
# [PowerShell](#tab/powershell)
-To upload a blob with an encryption scope specified by using PowerShell, call the [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) command and provide the encryption scope for the blob.
+To upload a blob with an encryption scope via PowerShell, call the [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) command and provide the encryption scope for the blob.
```powershell $containerName2 = "container2" $localSrcFile = "C:\temp\helloworld.txt"
-$ctx = (Get-AzStorageAccount -ResourceGroupName $rgName -StorageAccountName $accountName).Context
+$ctx = New-AzStorageContext -StorageAccountName $accountName -UseConnectedAccount
# Create a new container with no default scope defined. New-AzStorageContainer -Name $containerName2 -Context $ctx+ # Upload a block upload with an encryption scope specified.
-Set-AzStorageBlobContent -Context $ctx -Container $containerName2 -File $localSrcFile -Blob "helloworld.txt" -BlobType Block -EncryptionScope $scopeName2
+Set-AzStorageBlobContent -Context $ctx `
+ -Container $containerName2 `
+ -File $localSrcFile `
+ -Blob "helloworld.txt" `
+ -BlobType Block `
+ -EncryptionScope $scopeName2
``` # [Azure CLI](#tab/cli)
-To upload a blob with an encryption scope specified by using Azure CLI, call the [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) command and provide the encryption scope for the blob.
+To upload a blob with an encryption scope via Azure CLI, call the [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) command and provide the encryption scope for the blob.
If you are using Azure Cloud Shell, follow the steps described in [Upload a blob](storage-quickstart-blobs-cli.md#upload-a-blob) to create a file in the root directory. You can then upload this file to a blob using the following sample.
az storage account encryption-scope update \
--state Disabled ```
+> [!IMPORTANT]
+> It is not possible to delete an encryption scope. To avoid unexpected costs, be sure to disable any encryption scopes that you do not currently need.
+ ## Next steps - [Azure Storage encryption for data at rest](../common/storage-service-encryption.md)-- [Encryption scopes for Blob storage (preview)](encryption-scope-overview.md)
+- [Encryption scopes for Blob storage](encryption-scope-overview.md)
- [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md)
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-overview.md
Title: Encryption scopes for Blob storage (preview)
+ Title: Encryption scopes for Blob storage
description: Encryption scopes provide the ability to manage encryption at the level of the container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. Previously updated : 03/05/2021 Last updated : 03/26/2021
-# Encryption scopes for Blob storage (preview)
+# Encryption scopes for Blob storage
-Encryption scopes provide the ability to manage encryption at the level of the container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers.
+Encryption scopes enable you to manage encryption with a key that is scoped to a container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers.
-By default, a storage account is encrypted with a key that is scoped to the entire storage account. With an encryption scope, you can specify that one or more containers are encrypted with a key that is scoped only to those containers.
+For more information about working with encryption scopes, see [Create and manage encryption scopes](encryption-scope-manage.md).
-You can choose to use either Microsoft-managed keys or customer-managed keys stored in Azure Key Vault to protect and control access to the key that encrypts your data. Different encryption scopes on the same storage account can use either Microsoft-managed or customer-managed keys.
-After you have created an encryption scope, you can specify that encryption scope on a request to create a container or a blob. For more information about how to create an encryption scope, see [Create and manage encryption scopes (preview)](encryption-scope-manage.md).
+## How encryption scopes work
-> [!IMPORTANT]
-> Encryption scopes are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To avoid unexpected costs, be sure to disable any encryption scopes that you do not currently need.
->
-> Encryption scopes are not supported with read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS) accounts during preview.
+By default, a storage account is encrypted with a key that is scoped to the entire storage account. When you define an encryption scope, you specify a key that may be scoped to a container or an individual blob. When the encryption scope is applied to a blob, the blob is encrypted with that key. When the encryption scope is applied to a container, it serves as the default scope for blobs in that container, so that all blobs that are uploaded to that container may be encrypted with the same key. The container can be configured to enforce the default encryption scope for all blobs in the container, or to permit an individual blob to be uploaded to the container with an encryption scope other than the default.
+Read operations on a blob that was created with an encryption scope happen transparently, so long as the encryption scope is not disabled.
+
+### Key management
+
+When you define an encryption scope, you can specify whether the scope is protected with a Microsoft-managed key or with a customer-managed key that is stored in Azure Key Vault. Different encryption scopes on the same storage account can use either Microsoft-managed or customer-managed keys. You can also switch the type of key used to protect an encryption scope from a customer-managed key to a Microsoft-managed key, or vice versa, at any time. For more information about customer-managed keys, see [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md). For more information about Microsoft-managed keys, see [About encryption key management](../common/storage-service-encryption.md#about-encryption-key-management).
+
+If you define an encryption scope with a customer-managed key, then you can choose to update the key version either automatically or manually. If you choose to automatically update the key version, then Azure Storage checks the key vault or managed HSM daily for a new version of the customer-managed key and automatically updates the key to the latest version. For more information about updating the key version for a customer-managed key, see [Update the key version](../common/customer-managed-keys-overview.md#update-the-key-version).
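To make the automatic option concrete, the following PowerShell sketch (Az.Storage module, placeholder names) creates a scope whose key URI omits the version segment, which is what opts the scope into automatic key version updates:

```powershell
# The key URI deliberately has no version segment, so the scope tracks the latest key version.
New-AzStorageEncryptionScope -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -EncryptionScopeName "<scope>" `
    -KeyvaultEncryption `
    -KeyUri "https://<key-vault-name>.vault.azure.net/keys/<key-name>"
```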
-## Create a container or blob with an encryption scope
+A storage account may have up to 10,000 encryption scopes that are protected with customer-managed keys for which the key version is automatically updated. If your storage account already has 10,000 encryption scopes that are protected with customer-managed keys that are being automatically updated, then the key version must be updated manually for any additional encryption scopes that are protected with customer-managed keys.
-Blobs that are created under an encryption scope are encrypted with the key specified for that scope. You can specify an encryption scope for an individual blob when you create the blob, or you can specify a default encryption scope when you create a container. When a default encryption scope is specified at the level of a container, all blobs in that container are encrypted with the key associated with the default scope.
+### Encryption scopes for containers and blobs
-When you create a blob in a container that has a default encryption scope, you can specify an encryption scope that overrides the default encryption scope if the container is configured to allow overrides of the default encryption scope. To prevent overrides of the default encryption scope, configure the container to deny overrides for an individual blob.
+When you create a container, you can specify a default encryption scope for the blobs that are subsequently uploaded to that container. When you specify a default encryption scope for a container, you can decide how the default encryption scope is enforced:
-Read operations on a blob that belongs to an encryption scope happen transparently, so long as the encryption scope is not disabled.
+- You can require that all blobs uploaded to the container use the default encryption scope. In this case, every blob in the container is encrypted with the same key.
+- You can permit a client to override the default encryption scope for the container, so that a blob may be uploaded with an encryption scope other than the default scope. In this case, the blobs in the container may be encrypted with different keys.
-## Disable an encryption scope
+The following table summarizes the behavior of a blob upload operation, depending on how the default encryption scope is configured for the container:
+
+| The encryption scope defined on the container is… | Uploading a blob with the default encryption scope… | Uploading a blob with an encryption scope other than the default scope… |
+|--|--|--|
+| A default encryption scope with overrides permitted | Succeeds | Succeeds |
+| A default encryption scope with overrides prohibited | Succeeds | Fails |
+
+A default encryption scope must be specified for a container at the time that the container is created.
+
+If no default encryption scope is specified for the container, then you can upload a blob using any encryption scope that you've defined for the storage account. The encryption scope must be specified at the time that the blob is uploaded.
+
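A quick way to see which row of the table applies to an existing container is to inspect its management-plane properties. This is a sketch; the `DefaultEncryptionScope` and `DenyEncryptionScopeOverride` property names are assumptions based on the Az.Storage management cmdlets.

```powershell
# Inspect the container's default encryption scope and whether overrides are blocked.
$container = Get-AzRmStorageContainer -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -Name "<container>"

$container | Select-Object Name, DefaultEncryptionScope, DenyEncryptionScopeOverride
```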
+## Disabling an encryption scope
When you disable an encryption scope, any subsequent read or write operations made with the encryption scope will fail with HTTP error code 403 (Forbidden). If you re-enable the encryption scope, read and write operations will proceed normally again. When an encryption scope is disabled, you are no longer billed for it. Disable any encryption scopes that are not needed to avoid unnecessary charges.
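For example, a scope can be disabled (and later re-enabled) with a single call; a minimal PowerShell sketch with placeholder names:

```powershell
# Disable the scope to stop billing for it; set -State Enabled to turn it back on later.
Update-AzStorageEncryptionScope -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -EncryptionScopeName "<scope>" `
    -State Disabled
```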
-If your encryption scope is protected with customer-managed keys, then you can also delete the associated key in the key vault in order to disable the encryption scope. Keep in mind that customer-managed keys are protected by soft delete and purge protection in the key vault, and a deleted key is subject to the behavior defined for by those properties. For more information, see one of the following topics in the Azure Key Vault documentation:
+If your encryption scope is protected with a customer-managed key, and you delete the key in the key vault, the data will become inaccessible. Be sure to also disable the encryption scope to avoid being charged for it.
+
+Keep in mind that customer-managed keys are protected by soft delete and purge protection in the key vault, and a deleted key is subject to the behavior defined by those properties. For more information, see one of the following topics in the Azure Key Vault documentation:
- [How to use soft-delete with PowerShell](../../key-vault/general/key-vault-recovery.md) - [How to use soft-delete with CLI](../../key-vault/general/key-vault-recovery.md)
-> [!NOTE]
+> [!IMPORTANT]
> It is not possible to delete an encryption scope. ## Next steps - [Azure Storage encryption for data at rest](../common/storage-service-encryption.md)-- [Create and manage encryption scopes (preview)](encryption-scope-manage.md)
+- [Create and manage encryption scopes](encryption-scope-manage.md)
- [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md) - [What is Azure Key Vault?](../../key-vault/general/overview.md)
storage Storage Quickstart Blobs Xamarin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-xamarin.md
Reference links:
* Azure subscription - [create one for free](https://azure.microsoft.com/free/) * Azure storage account - [create a storage account](../common/storage-account-create.md)
-* Visual Studio with [Mobile Development for .NET workload](/xamarin/get-started/installation/?pivots=windows) installed or [Visual Studio for Mac](/visualstudio/mac/installation?view=vsmac-2019)
+* Visual Studio with [Mobile Development for .NET workload](/xamarin/get-started/installation/?pivots=windows) installed or [Visual Studio for Mac](/visualstudio/mac/installation?view=vsmac-2019&preserve-view=true)
## Setting up
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 03/09/2021 Last updated : 03/23/2021
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-service-encryption.md
Previously updated : 09/17/2020 Last updated : 03/23/2021
Service-level encryption supports the use of either Microsoft-managed keys or cu
For more information about how to create a storage account that enables infrastructure encryption, see [Create a storage account with infrastructure encryption enabled for double encryption of data](infrastructure-encryption-enable.md).
-## Encryption scopes for Blob storage (preview)
-
-By default, a storage account is encrypted with a key that is scoped to the storage account. You can choose to use either Microsoft-managed keys or customer-managed keys stored in Azure Key Vault to protect and control access to the key that encrypts your data.
-
-Encryption scopes enable you to optionally manage encryption at the level of the container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers.
-
-You can create one or more encryption scopes for a storage account using the Azure Storage resource provider. When you create an encryption scope, you specify whether the scope is protected with a Microsoft-managed key or with a customer-managed key that is stored in Azure Key Vault. Different encryption scopes on the same storage account can use either Microsoft-managed or customer-managed keys.
-
-After you have created an encryption scope, you can specify that encryption scope on a request to create a container or a blob. For more information about how to create an encryption scope, see [Create and manage encryption scopes (preview)](../blobs/encryption-scope-manage.md).
-
-> [!NOTE]
-> Encryption scopes are not supported with read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage (RA-GZRS) accounts during preview.
--
-> [!IMPORTANT]
-> The encryption scopes preview is intended for non-production use only. Production service-level agreements (SLAs) are not currently available.
->
-> To avoid unexpected costs, be sure to disable any encryption scopes that you do not currently need.
-
-### Create a container or blob with an encryption scope
-
-Blobs that are created under an encryption scope are encrypted with the key specified for that scope. You can specify an encryption scope for an individual blob when you create the blob, or you can specify a default encryption scope when you create a container. When a default encryption scope is specified at the level of a container, all blobs in that container are encrypted with the key associated with the default scope.
-
-When you create a blob in a container that has a default encryption scope, you can specify an encryption scope that overrides the default encryption scope if the container is configured to allow overrides of the default encryption scope. To prevent overrides of the default encryption scope, configure the container to deny overrides for an individual blob.
-
-Read operations on a blob that belongs to an encryption scope happen transparently, so long as the encryption scope is not disabled.
-
-### Disable an encryption scope
-
-When you disable an encryption scope, any subsequent read or write operations made with the encryption scope will fail with HTTP error code 403 (Forbidden). If you re-enable the encryption scope, read and write operations will proceed normally again.
-
-When an encryption scope is disabled, you are no longer billed for it. Disable any encryption scopes that are not needed to avoid unnecessary charges.
-
-If your encryption scope is protected with customer-managed keys for Azure Key Vault, then you can also delete the associated key in the key vault in order to disable the encryption scope. Keep in mind that customer-managed keys in Azure Key Vault are protected by soft delete and purge protection, and a deleted key is subject to the behavior defined for by those properties. For more information, see one of the following topics in the Azure Key Vault documentation:
--- [How to use soft-delete with PowerShell](../../key-vault/general/key-vault-recovery.md)-- [How to use soft-delete with CLI](../../key-vault/general/key-vault-recovery.md)-
-> [!NOTE]
-> It is not possible to delete an encryption scope.
- ## Next steps - [What is Azure Key Vault?](../../key-vault/general/overview.md) - [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md)-- [Encryption scopes for Blob storage (preview)](../blobs/encryption-scope-overview.md)
+- [Encryption scopes for Blob storage](../blobs/encryption-scope-overview.md)
+- [Provide an encryption key on a request to Blob storage](../blobs/encryption-customer-provided-keys.md)
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-netapp-comparison.md
Most workloads that require cloud file storage work well on either Azure Files o
| Identity-Based Authentication and Authorization | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> Note that identify-based authentication is only supported when using SMB protocol. To learn more, see [FAQ](https://docs.microsoft.com/azure/storage/files/storage-files-faq#security-authentication-and-access-control). | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> NFS/SMB dual protocol<ul><li>ADDS/LDAP integration</li></ul><br>NFSv3/NFSv4.1<ul><li>ADDS/LDAP integration (coming)</li><li>NFS extended groups (coming)</li></ul><br> To learn more, see [FAQ](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-faqs). | | Encryption | SMB<br><ul><li>Encryption at rest (AES 256) with customer or Microsoft-managed keys</li><li>Kerberos encryption using AES 256 or RC4-HMAC</li><li>Encryption in transit</li></ul><br>NFS<br><ul><li>Encryption at rest (AES 256) with customer or Microsoft-managed keys</li></ul><br>REST<br><ul><li>Encryption at rest (AES 256) with customer or Microsoft-managed keys</li><li>Encryption in transit</li></ul><br> To learn more, see [security](https://docs.microsoft.com/azure/storage/files/storage-files-compare-protocols#security). | All protocols<br><ul><li>Encryption at rest (AES 256) with Microsoft-managed keys </li></ul><br>NFS 4.1<ul><li>Encryption in transit using Kerberos with AES 256</li></ul><br> To learn more, see [security FAQ](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-faqs#security-faqs). | | Access Options | <ul><li>Internet</li><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>Azure File Sync</li></ul><br> To learn more, see [network considerations](https://docs.microsoft.com/azure/storage/files/storage-files-networking-overview). | <ul><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[HPC Cache](https://docs.microsoft.com/azure/hpc-cache/hpc-cache-overview)</li></ul><br> To learn more, see [network considerations](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-network-topologies). |
-| Data Protection | <ul><li>Incremental snapshots</li><li>File/directory user self-restore</li><li>Restore to new location</li><li>In-place revert</li><li>Share-level soft delete</li><li>Azure Backup integration</li></ul><br> To learn more, see [Azure Files enhances data protection capabilities](https://azure.microsoft.com/blog/azure-files-enhances-data-protection-capabilities/). | <ul><li>Snapshots (255/volume)</li><li>File/directory user self-restore</li><li>Restore to new volume</li><li>In-place revert</li><li>[Cross-Region Replication](https://docs.microsoft.com/azure/azure-netapp-files/cross-region-replication-introduction) (public preview)</li><li>Azure NetApp Files Backup (preview)</li></ul><br> To learn more, see [How Azure NetApp Files snapshots work](https://docs.microsoft.com/azure/azure-netapp-files/snapshots-introduction). |
+| Data Protection | <ul><li>Incremental snapshots</li><li>File/directory user self-restore</li><li>Restore to new location</li><li>In-place revert</li><li>Share-level soft delete</li><li>Azure Backup integration</li></ul><br> To learn more, see [Azure Files enhances data protection capabilities](https://azure.microsoft.com/blog/azure-files-enhances-data-protection-capabilities/). | <ul><li>Snapshots (255/volume)</li><li>File/directory user self-restore</li><li>Restore to new volume</li><li>In-place revert</li><li>[Cross-Region Replication](https://docs.microsoft.com/azure/azure-netapp-files/cross-region-replication-introduction) (public preview)</li></ul><br> To learn more, see [How Azure NetApp Files snapshots work](https://docs.microsoft.com/azure/azure-netapp-files/snapshots-introduction). |
| Migration Tools | <ul><li>Azure Data Box</li><li>Azure File Sync</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li></ul><br> To learn more, see [Migrate to Azure file shares](https://docs.microsoft.com/azure/storage/files/storage-files-migration-overview). | <ul><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[CloudSync](https://cloud.netapp.com/cloud-sync-service), [XCP](https://xcp.netapp.com/)</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li><li>Application-based (for example, HSR, Data Guard, AOAG)</li></ul> | | Tiers | <ul><li>Premium</li><li>Transaction Optimized</li><li>Hot</li><li>Cool</li></ul><br> To learn more, see [storage tiers](https://docs.microsoft.com/azure/storage/files/storage-files-planning#storage-tiers). | <ul><li>Ultra</li><li>Premium</li><li>Standard</li></ul><br> All tiers provide sub-ms minimum latency.<br><br> To learn more, see [Service Levels](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-service-levels) and [Performance Considerations](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-performance-considerations). | | Pricing | [Azure Files Pricing](https://azure.microsoft.com/pricing/details/storage/files/) | [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/) |
storsimple Storsimple 8000 Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-8000-migration-options.md
Learn more on [migration to the Cohesity Data Platform](https://info.cohesity.co
Nasuni makes it easy for StorSimple 5000-7000 customers to migrate and keep their data in Azure. Nasuni is a leading Azure-based NAS storage solution, giving customers the performance and security they expect from on-prem solutions, with cloud economics and scale. In addition to high performance file storage, Nasuni and Azure handle backup and DR, while allowing you to share and collaborate on your data around the globe with centralized file storage management.
-Nasuni has the experience to make your migration easy ΓÇô get started today: https://info.nasuni.com/nasuni-storsimple-migration
+Nasuni has the experience to make your migration easy – get started today: https://www.nasuni.com/blog-migrating-off-storsimple/
#### Migrate to Talon FAST
A. The End of Support date for StorSimple 8000 series is published [here](https:
## Next steps - [Migrate data from a StorSimple 5000-7000 series to an 8000 series device](storsimple-8000-migrate-from-5000-7000.md).
+ - [Migrate data from a StorSimple 5000-7000 series to Azure File Sync](../storage/files/storage-files-migration-storsimple-8000.md)
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-overview.md
Following is a list of other software used with StorSimple to build solutions.
| Backup target |Veeam |Veeam v 9 and later |[StorSimple as a backup target with Veaam](storsimple-configure-backup-target-veeam.md)| | Backup target |Veritas Backup Exec |Backup Exec 16 and later |[StorSimple as a backup target with Backup Exec](storsimple-configure-backup-target-using-backup-exec.md)| | Backup target |Veritas NetBackup |NetBackup 7.7.x and later |[StorSimple as a backup target with NetBackup](storsimple-configure-backuptarget-netbackup.md)|
-| Global File Sharing <br></br> Collaboration |Talon |[StorSimple with Talon](https://www.talonstorage.com/products/archive/fast-deployment-azure-storsimple) | |
+| Global File Sharing <br></br> Collaboration |Talon |[StorSimple with Talon](https://www.theinfostride.com/talon-and-microsoft-to-host-azure-storsimple-web-conference-with-capita/) | |
## StorSimple terminology Before deploying your Microsoft Azure StorSimple solution, we recommend that you review the following terms and definitions.
Before deploying your Microsoft Azure StorSimple solution, we recommend that you
| Windows PowerShell for StorSimple |A Windows PowerShell–based command-line interface used to operate and manage your StorSimple device. While maintaining some of the basic capabilities of Windows PowerShell, this interface has additional dedicated cmdlets that are geared towards managing a StorSimple device. | ## Next steps
-Learn about [StorSimple security](storsimple-8000-security.md).
+Learn about [StorSimple security](storsimple-8000-security.md).
stream-analytics Set Up Cicd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/set-up-cicd-pipeline.md
The steps in this article use a Stream Analytics Visual Studio Code project. If
## Create a build pipeline
-In this section, you learn how to create a build pipeline. You can reference this sample [auto build and test pipeline](https://dev.azure.com/ASA-CICD-sample/azure-streamanalytics-cicd-demo/_build) in Azure DevOps.
+In this section, you learn how to create a build pipeline.
1. Open a web browser and navigate to your project in Azure DevOps.
-1. Under **Pipelines** in the left navigation menu, select **Builds**. Then, select **New pipeline**.
+2. Under **Pipelines** in the left navigation menu, select **Builds**. Then, select **New pipeline**.
:::image type="content" source="media/set-up-cicd-pipeline/new-pipeline.png" alt-text="Create new Azure Pipeline":::
-1. Select **Use the classic editor** to create a pipeline without YAML.
+3. Select **Use the classic editor** to create a pipeline without YAML.
-1. Select your source type, team project, and repository. Then, select **Continue**.
+4. Select your source type, team project, and repository. Then, select **Continue**.
:::image type="content" source="media/set-up-cicd-pipeline/select-repo.png" alt-text="Select Azure Stream Analytics project":::
-1. On the **Choose a template** page, select **Empty job**.
+5. On the **Choose a template** page, select **Empty job**.
## Install npm package
The test summary file and Azure Resource Manager Template files can be found in
## Release with Azure Pipelines
-In this section, you learn how to create a release pipeline. You can reference this sample [release pipeline](https://dev.azure.com/ASA-CICD-sample/azure-streamanalytics-cicd-demo/_release?_a=releases&view=mine&definitionId=2) in Azure DevOps.
+In this section, you learn how to create a release pipeline.
Open a web browser and navigate to your Azure Stream Analytics Visual Studio Code project.
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
For additional assistance with completing this migration scenario, please see th
| [Getting table sizes in Azure Synapse Analytics dedicated SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration: collecting load times from on-premises to the cloud, collecting PolyBase load times, etc. Of these tasks, one of the most important is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. | | [Utility to move On-Premises SQL Server Logins to Azure Synapse Analytics](https://github.com/Microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins) | A PowerShell script that creates a T-SQL command script to re-create logins and select database users from an "on premises" SQL Server to an Azure SQL PaaS service. The tool allows the automatic mapping of Windows AD accounts to Azure AD accounts or it can do UPN lookups for each login against the on-premises Windows Active Directory. The tool optionally moves SQL Server native logins as well. Custom server and database roles are scripted, as well as role membership and database role and user permissions. Contained databases are not yet supported and only a subset of possible SQL Server permissions are scripted; for example, permissions granted WITH GRANT OPTION are not supported (complex permission trees). More details are available in the support document and the script has comments for ease of understanding. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Videos - Watch how [Walgreens migrated their retail inventory system](https://www.youtube.com/watch?v=86dhd8N1lH4) with about 100TB of data from Netezza to Azure Synapse Analytics (formerly SQL DW) in record time.
synapse-analytics Sql Data Warehouse Partner Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| Partner | Description | Website/Product link | | - | -- | -- | | ![Ab Initio](./media/sql-data-warehouse-partner-data-integration/abinitio-logo.png) |**Ab Initio**<br> Ab Initio's agile digital engineering platform helps you solve the toughest data processing and data management problems in corporate computing. Ab Initio's cloud-native platform lets you access and use data anywhere in your corporate ecosystem, whether in Azure or on-premises, including data stored on legacy systems. The combination of an intuitive interface with powerful automation, data quality, data governance, and active metadata capabilities enables rapid development and true data self-service, freeing analysts to do their jobs quickly and effectively. Join the world's largest businesses in using Ab Initio to turn big data into meaningful data. |[Product page](https://www.abinitio.com/) |
+| ![Aecorsoft](./media/sql-data-warehouse-partner-data-integration/aecorsoft-logo.png) |**AecorSoft**<br> AecorSoft offers a fast, scalable, real-time ELT/ETL software solution that helps SAP customers bring complex SAP data to Azure Synapse Analytics and the Azure data platform. Fully compliant with SAP application-layer security, the AecorSoft solution is officially SAP Premium Certified to integrate with SAP applications. AecorSoft's unique Super Delta and Change-Data-Capture features enable SAP users to stream delta data from SAP transparent, pool, and cluster tables to Azure in CSV, Parquet, Avro, ORC, or GZIP format. Besides SAP tabular data, many other business-rule-heavy SAP objects like BW queries and S/4HANA CDS Views are fully supported. |[Product page](https://www.aecorsoft.com/products/dataintegrator)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aecorsoftinc1588038796343.aecorsoftintegrationservice_adf)<br>|
| ![Alooma](./media/sql-data-warehouse-partner-data-integration/alooma_logo.png) |**Alooma**<br> Alooma is an Extract, Transform, and Load (ETL) solution that enables data teams to integrate, enrich, and stream data from various data silos to an Azure Synapse data warehouse all in real time. |[Product page](https://www.alooma.com/) | | ![Alteryx](./media/sql-data-warehouse-partner-data-integration/alteryx_logo.png) |**Alteryx**<br> Alteryx Designer provides a repeatable workflow for self-service data analytics that leads to deeper insights in hours, not the weeks typical of traditional approaches! Alteryx Designer helps data analysts by combining data preparation, data blending, and analytics ΓÇô predictive, statistical, and spatial ΓÇô using the same intuitive user interface. |[Product page](https://www.alteryx.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/alteryx.alteryx-designer)<br>| | ![Attunity](./media/sql-data-warehouse-partner-data-integration/attunity_logo.png) |**Attunity (CloudBeam)**<br>Attunity CloudBeam provides an automated solution for loading data into an Azure Synapse data warehouse. It simplifies batch loading and incremental replication of data from many sources - SQL Server, Oracle, DB2, Sybase, MySQL, and more. |[Product page](http://www.attunity.com/attunity-cloudbeam-for-azure/)<br>[Azure Marketplace](https://aws.amazon.com/marketplace/pp/Attunity-Attunity-CloudBeam/B00B5PB8IM) <br> |
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-openrowset.md
Specifies encoding: char is used for UTF8, widechar is used for UTF16 files.
CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' }
-Specifies the code page of the data in the data file. The default value is 65001 (UTF-8 encoding). See more details about this option [here](/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver15#codepage).
+Specifies the code page of the data in the data file. The default value is 65001 (UTF-8 encoding). See more details about this option [here](/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver15&preserve-view=true#codepage).
## Fast delimited text parsing
The following example reads CSV file that contains header row without specifying
```sql SELECT
- *
+ *
FROM OPENROWSET( BULK 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.csv', FORMAT = 'CSV', PARSER_VERSION = '2.0',
- HEADER_ROW = TRUE) as [r]
+ HEADER_ROW = TRUE) as [r]
``` The following example reads CSV file that doesn't contain header row without specifying column names and data types: ```sql SELECT
- *
+ *
FROM OPENROWSET( BULK 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.csv', FORMAT = 'CSV',
The following example returns only two columns with ordinal numbers 1 and 4 from
```sql SELECT
- *
+ *
FROM OPENROWSET( BULK 'https://sqlondemandstorage.blob.core.windows.net/csv/population/population*.csv', FORMAT = 'CSV',
FROM
FORMAT='PARQUET' ) WITH (
- [stateName] VARCHAR (50),
- [population] bigint
+ [stateName] VARCHAR (50),
+ [population] bigint
) AS [r] ```
FROM
FORMAT='PARQUET' ) WITH (
- --lax path mode samples
- [stateName] VARCHAR (50), -- this one works as column name casing is valid - it targets the same column as the next one
- [stateName_explicit_path] VARCHAR (50) '$.stateName', -- this one works as column name casing is valid
- [COUNTYNAME] VARCHAR (50), -- STATEname column will contain NULLs only because of wrong casing - it targets the same column as the next one
- [countyName_explicit_path] VARCHAR (50) '$.COUNTYNAME', -- STATEname column will contain NULLS only because of wrong casing and default path mode being lax
-
- --strict path mode samples
- [population] bigint 'strict $.population' -- this one works as column name casing is valid
- --,[population2] bigint 'strict $.POPULATION' -- this one fails because of wrong casing and strict path mode
+ --lax path mode samples
+ [stateName] VARCHAR (50), -- this one works as column name casing is valid - it targets the same column as the next one
+ [stateName_explicit_path] VARCHAR (50) '$.stateName', -- this one works as column name casing is valid
+ [COUNTYNAME] VARCHAR (50), -- STATEname column will contain NULLs only because of wrong casing - it targets the same column as the next one
+ [countyName_explicit_path] VARCHAR (50) '$.COUNTYNAME', -- STATEname column will contain NULLS only because of wrong casing and default path mode being lax
+
+ --strict path mode samples
+ [population] bigint 'strict $.population' -- this one works as column name casing is valid
+ --,[population2] bigint 'strict $.POPULATION' -- this one fails because of wrong casing and strict path mode
) AS [r] ```
virtual-desktop Azure Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-monitor-costs.md
+
+ Title: Monitor Windows Virtual Desktop cost pricing estimates - Azure
+description: How to estimate costs and pricing for using Azure Monitor for Windows Virtual Desktop.
++ Last updated : 03/29/2021++++
+# Estimate Azure Monitor costs
+
+Azure Monitor Logs is a service that collects, indexes, and stores data generated by your environment. Because of this, the Azure Monitor pricing model is based on the amount of data that's brought into and processed (or "ingested") by your Log Analytics workspace in gigabytes per day. The cost of a Log Analytics workspace isn't based only on the volume of data collected, but also on which Azure payment plan you've selected and how long you choose to store the data your environment generates.
+
+This article will explain the following things to help you understand how pricing in Azure Monitor works:
+
+- How to estimate data ingestion and storage costs upfront before you enable this feature
+- How to measure and control your ingestion and storage to reduce costs when using this feature
+
+>[!NOTE]
+> All sizes and pricing listed in this article are just examples to demonstrate how estimation works. For a more accurate assessment based on your Azure Monitor Log Analytics pricing model and Azure region, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Estimate data ingestion and storage costs
+
+We recommend you use a predefined set of data written as logs in your Log Analytics workspace. In the following example estimates, we'll look at billable data in the default configuration.
+
+The predefined datasets for Azure Monitor for Windows Virtual Desktop include:
+
+- Performance counters from the session hosts
+- Windows Event Logs from the session hosts
+- Windows Virtual Desktop diagnostics from the service infrastructure
+
+Your data ingestion and storage costs depend on your environment size, health, and usage. The example estimates in this article are based on healthy virtual machines running light to power usage, per our [virtual machine sizing guidelines](/remote/remote-desktop-services/virtual-machine-recs), and give a range of data ingestion and storage costs you can expect.
+
+The light usage VM we'll be using in our example includes the following components:
+
+- 4 vCPUs, 1 disk
+- 16 sessions per day
+- An average session duration of 2 hours (120 minutes)
+- 100 processes per session
+
+The power usage VM we'll be using in our example includes the following components:
+
+- 6 vCPUs, 1 disk
+- 6 sessions per day
+- Average session duration of 4 hours (240 minutes)
+- 200 processes per session
+
+## Estimating performance counter ingestion
+
+Performance counters show how the system resources are performing. Performance counter data ingestion depends on your environment size and usage. In most cases, performance counters should make up 80 to 99% of your data ingestion for Azure Monitor for Windows Virtual Desktop.
+
+Before you start estimating, it's important to understand that each performance counter sends data at a specific frequency. We set a default sample rate per minute (you can also edit this rate in your settings), but that rate is applied with different multiplying factors depending on the counter. The following factors affect the rate:
+
+- For the per virtual machine (VM) factor, each counter sends data per VM in your environment at the default sample rate per minute while the VM is running. You can estimate the number of records these counters send per day by multiplying the default sample rate per minute by the number of VMs in your environment, then multiplying that number by the average VM running time per day.
+
+ To summarize:
+
+   Default sample rate per minute × number of VMs × average VM running time per day (in minutes) = number of records sent per day
+
+- For the per CPU factor, each counter sends data at the default sample rate per minute per vCPU in each VM in your environment while the VM is running. You can estimate the number of records the counters will send per day by multiplying the default sample rate per minute by the number of CPU cores in the VM SKU, then multiplying that number by the number of minutes the VM runs and the number of VMs in your environment.
+
+ To summarize:
+
+ Default sample rate per minute × number of CPU cores in the VM SKU × number of minutes the VM runs × number of VMs = number of records sent per day
+
+- For the per disk factor, each counter sends data at the default sample rate for each disk in each VM in your environment. The number of records these counters will send per day equals the default sample rate per minute multiplied by the number of disks in the VM SKU, multiplied by 60 minutes per hour, multiplied by the average active hours per day for a VM, and finally multiplied by the number of VMs in your environment.
+
+ To summarize:
+
+ Default sample rate per minute × number of disks in VM SKU × 60 minutes per hour × number of VMs × average VM running time per day = number of records sent per day
+
+- For the per session factor, each counter sends data at the default sample rate for each session in your environment while the session is connected. You can estimate the number of records these counters will send per day by multiplying the default sample rate per minute by the average number of sessions per day and the average session duration.
+
+ To summarize:
+
+ Default sample rate per minute × sessions per day × average session duration = number of records sent per day
+
+- For the per-process factor, each counter sends data at the default rate for each process in each session in your environment. You can estimate the number of records these counters will send per day by multiplying the default sample rate per minute by the average number of sessions per day, then multiplying that by the average session duration and the average number of processes per session.
+
+ To summarize:
+
+ Default sample rate per minute × sessions per day × average session duration × average number of processes per session = number of records sent per day
+
+The following table lists the 20 performance counters Azure Monitor for Windows Virtual Desktop collects and their default rates:
+
+| Counter name | Default sample rate | Frequency factor |
+|--|--|--|
+| Logical Disk(C:)\\% free space | 60 seconds | Per disk |
+| Logical Disk(C:)\\Avg. Disk Queue Length | 30 seconds | Per disk |
+| Logical Disk(C:)\\Avg. Disk sec/Transfer | 60 seconds | Per disk |
+| Logical Disk(C:)\\Current Disk Queue Length | 30 seconds | Per disk |
+| Memory(\*)\\Available Mbytes | 30 seconds | Per VM |
+| Memory(\*)\\Page Faults/sec | 30 seconds | Per VM |
+| Memory(\*)\\Pages/sec | 30 seconds | Per VM |
+| Memory(\*)\\% Committed Bytes in Use | 30 seconds | Per VM |
+| PhysicalDisk(\*)\\Avg. Disk Queue Length | 30 seconds | Per disk |
+| PhysicalDisk(\*)\\Avg. Disk sec/Read | 30 seconds | Per disk |
+| PhysicalDisk(\*)\\Avg. Disk sec/Transfer | 30 seconds | Per disk |
+| PhysicalDisk(\*)\\Avg. Disk sec/Write | 30 seconds | Per disk |
+| Processor Information(_Total)\\% Processor Time | 30 seconds | Per core/CPU |
+| Terminal Services(\*)\\Active Sessions | 60 seconds | Per VM |
+| Terminal Services(\*)\\Inactive Sessions | 60 seconds | Per VM |
+| Terminal Services(\*)\\Total Sessions | 60 seconds | Per VM |
+| User Input Delay per Process(\*)\\Max Input Delay | 30 seconds | Per process |
+| User Input Delay per Session(\*)\\Max Input Delay | 30 seconds | Per session |
+| RemoteFX Network(\*)\\Current TCP RTT | 30 seconds | Per VM |
+| RemoteFX Network(\*)\\Current UDP Bandwidth | 30 seconds | Per VM |
+
+If we estimate each record size to be 200 bytes, an example VM running a light workload on the default sample rate would send roughly 90 megabytes of performance counter data per day per VM. Meanwhile, an example VM running a power workload would send roughly 130 megabytes of performance counter data per day per VM. However, record size and environment usage can vary, so the megabytes per day your deployment uses may be different.
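As a rough cross-check of these figures, the following sketch (illustrative only, not part of the service) applies the frequency factors above to the light-usage example VM for one counter of each type, assuming 200 bytes per record and a VM that runs 24 hours a day.

```python
# Illustrative back-of-the-envelope estimate for one light-usage VM.
# Assumptions (not from the service): 200 bytes per record, VM running 24 hours/day.
BYTES_PER_RECORD = 200
MINUTES_PER_DAY = 24 * 60

# Light-usage example profile from this article.
VCPUS, DISKS = 4, 1
SESSIONS_PER_DAY, SESSION_MINUTES, PROCESSES_PER_SESSION = 16, 120, 100

def megabytes(records):
    return records * BYTES_PER_RECORD / (1024 * 1024)

# One example counter per frequency factor; a 30-second sample rate is 2 samples/minute.
per_vm      = 2 * MINUTES_PER_DAY                                    # e.g. Memory\Available Mbytes
per_cpu     = 2 * VCPUS * MINUTES_PER_DAY                            # Processor Information\% Processor Time
per_disk    = 2 * DISKS * MINUTES_PER_DAY                            # PhysicalDisk\Avg. Disk Queue Length
per_session = 2 * SESSIONS_PER_DAY * SESSION_MINUTES                 # Max Input Delay per Session
per_process = 2 * SESSIONS_PER_DAY * SESSION_MINUTES * PROCESSES_PER_SESSION  # Max Input Delay per Process

total = per_vm + per_cpu + per_disk + per_session + per_process
print(f"~{megabytes(total):.0f} MB/day from these five counters alone")
```

Summing the same arithmetic over all 20 default counters lands in the same ballpark as the roughly 90 MB per day quoted above; note how the per-process input delay counter dominates the total.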
+
+To learn more about input delay performance counters, see [User Input Delay performance counters](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters/).
+
+## Estimating Windows Event Log ingestion
+
+Windows Event Logs are data sources collected by Log Analytics agents on Windows virtual machines. You can collect events from standard logs like System and Application as well as custom logs created by applications you need to monitor.
+
+These are the default Windows Events for Azure Monitor for Windows Virtual Desktop:
+
+- Application
+- Microsoft-Windows-TerminalServices-RemoteConnectionManager/Admin
+- Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
+- System
+- Microsoft-FSLogix-Apps/Operational
+- Microsoft-FSLogix-Apps/Admin
+
+Windows Events are sent whenever the conditions for the event are met in the environment. Machines in healthy states will send fewer events than machines in unhealthy states. Since event count is unpredictable, we use a range of 1,000 to 10,000 events per VM per day based on examples from healthy environments for this estimate. For example, if we estimate each event record to be 1,500 bytes, this comes out to roughly 2 to 15 megabytes of event data per VM per day.
+
+To learn more about Windows events, see [Windows event records properties](../azure-monitor/agents/data-sources-windows-events.md).
+
+## Estimating diagnostics ingestion
+
+The diagnostics service creates activity logs for both user and administrative actions.
+
+These are the names of the activity logs the diagnostic counter tracks:
+
+- WVDCheckpoints
+- WVDConnections
+- WVDErrors
+- WVDFeeds
+- WVDManagement
+- WVDAgentHealthStatus
+
+The service sends diagnostic information whenever the environment meets the conditions required to make a record. Since diagnostic record count is unpredictable, we use a range of 500 to 1,000 events per VM per day based on examples from healthy environments for this estimate.
+
+For example, if we estimate each diagnostic record to be 200 bytes, the total ingested diagnostic data would be less than 1 MB per VM per day.
+
+To learn more about the activity log categories, see [Windows Virtual Desktop diagnostics](diagnostics-log-analytics.md).
+
+## Estimating total costs
+
+Finally, let's estimate the total cost. In this example, let's say we come up with the following results based on the example values in the previous sections:
+
+| Data source | Size estimate per day (in megabytes) |
+|--|--|
+| Performance counters | 90-130 |
+| Events | 2-15 |
+| Windows Virtual Desktop diagnostics | \< 1 |
+
+In this example, the total ingested data for Azure Monitor for Windows Virtual Desktop is between 92 and 145 megabytes per VM per day. In other words, every 31 days, each VM ingests roughly 3 to 5 gigabytes of data.
+
+Using the default Pay-as-you-go model for [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/), you can estimate the Azure Monitor data collection and storage cost per month. Depending on your data ingestion, you may also consider the Capacity Reservation model for Log Analytics pricing.
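For a quick way to turn the per-VM daily figure into a monthly ingestion and cost estimate, a small calculation like the one below can help. The per-gigabyte price is a deliberately made-up placeholder; substitute the current Pay-As-You-Go rate for your region from the Azure Monitor pricing page.

```python
# Illustrative monthly estimate. PRICE_PER_GB is a placeholder, not an official rate;
# look up the current Pay-As-You-Go ingestion price for your region before using this.
MB_PER_VM_PER_DAY = (92, 145)   # low/high per-VM figures from this article
DAYS_PER_MONTH = 31
PRICE_PER_GB = 2.50             # placeholder value

def monthly(vm_count, mb_per_vm_per_day):
    gb = vm_count * mb_per_vm_per_day * DAYS_PER_MONTH / 1024
    return gb, gb * PRICE_PER_GB

for mb in MB_PER_VM_PER_DAY:
    gb, cost = monthly(vm_count=100, mb_per_vm_per_day=mb)
    print(f"{mb} MB/VM/day -> ~{gb:.0f} GB/month for 100 VMs -> ~{cost:.0f} (placeholder currency units)")
```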
+
+## Manage your data ingestion to reduce costs
+
+This section will explain how to measure and manage data ingestion to reduce costs.
+
+To learn about managing rights and permissions to the workbook, see [Access control](../azure-monitor/visualize/workbooks-access-control.md).
+
+>[!NOTE]
+>Removing data points will impact their corresponding visuals in Azure Monitor for Windows Virtual Desktop.
+
+### Log Analytics settings
+
+Here are some suggestions to optimize your Log Analytics settings to manage data ingestion:
+
+- Use a designated Log Analytics workspace for your Windows Virtual Desktop resources to ensure that Log Analytics only collects performance counters and events for the virtual machines in your Windows Virtual Desktop deployment.
+- Adjust your Log Analytics storage settings to manage costs. You can reduce the retention period, evaluate whether a fixed storage pricing tier would be more cost-effective, or set boundaries on how much data you can ingest to limit the impact of an unhealthy deployment. To learn more, see [Manage usage and costs for Azure Monitor Logs](../azure-monitor/platform/manage-cost-storage.md).
+
+### Remove excess data
+
+Our default configuration is the only set of data we recommend for Azure Monitor for Windows Virtual Desktop. You always have the option to add more data points and view them in the Host Diagnostics: Host browser or to build custom charts for them; however, any added data increases your Log Analytics cost and can be removed later for cost savings.
+
+### Measure and manage your performance counter data
+
+Your true monitoring costs will depend on your environment size, usage, and health. To understand how to measure data ingestion in your Log Analytics workspace, see [Understanding ingested log data volume](../azure-monitor/logs/manage-cost-storage.md#understanding-ingested-data-volume).
+
+The performance counters the session hosts use will probably be your largest source of ingested data for Azure Monitor for Windows Virtual Desktop. The following custom query template for a Log Analytics workspace can track frequency and megabytes ingested per performance counter over the last day:
+
+```azure
+let WVDHosts = dynamic(['Host1.MyCompany.com', 'Host2.MyCompany.com']);
+Perf
+| where TimeGenerated > ago(1d)
+| where Computer in (WVDHosts)
+| extend PerfCounter = strcat(ObjectName, ":", CounterName)
+| summarize Records = count(TimeGenerated), InstanceNames = dcount(InstanceName), Bytes=sum(_BilledSize) by PerfCounter
+| extend Billed_MBytes = Bytes / (1024 * 1024), BytesPerRecord = Bytes / Records
+| sort by Records desc
+```
+
+>[!NOTE]
+>Make sure to replace the template's placeholder values with the values your environment uses; otherwise, the query won't work.
+
+This query will show all performance counters you have enabled on the environment, not just the default ones for Azure Monitor for Windows Virtual Desktop. This information can help you understand which areas to target to reduce costs, like reducing a counter's frequency or removing it altogether.
+
+You can also reduce costs by removing performance counters. To learn how to remove performance counters or edit existing counters to reduce their frequency, see [Configuring performance counters](../azure-monitor/platform/data-sources-performance-counters.md#configuring-performance-counters).
+
+### Manage Windows Event Logs
+
+Windows Events are unlikely to cause a spike in data ingestion when all hosts are healthy. An unhealthy host can increase the number of events sent to the log, but the information can be critical to fixing the host's issues. We recommend keeping them. To learn more about how to manage Windows Event Logs, see [Configuring Windows Event logs](../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs).
+
+### Manage diagnostics
+
+Windows Virtual Desktop diagnostics should make up less than 1% of your data storage costs, so we don't recommend removing them. To manage Windows Virtual Desktop diagnostics, see [Use Log Analytics for the diagnostics feature](diagnostics-log-analytics.md).
+
+## Next steps
+
+Learn more about Azure Monitor for Windows Virtual Desktop at these articles:
+
+- [Use Azure Monitor for Windows Virtual Desktop to monitor your deployment](azure-monitor.md).
+- Use the [glossary](azure-monitor-glossary.md) to learn more about terms and concepts.
+- If you encounter a problem, check out our [troubleshooting guide](troubleshoot-azure-monitor.md) for help.
+- Check out [Monitoring usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) to learn more about managing your monitoring costs.
virtual-desktop Azure Monitor Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-monitor-glossary.md
Title: Monitor Windows Virtual Desktop preview glossary - Azure
+ Title: Monitor Windows Virtual Desktop glossary - Azure
description: A glossary of terms and concepts related to Azure Monitor for Windows Virtual Desktop. Previously updated : 3/25/2020 Last updated : 03/29/2021
-# Azure Monitor for Windows Virtual Desktop (preview) glossary
-
->[!IMPORTANT]
->Azure Monitor for Windows Virtual Desktop is currently in public preview. This preview version is provided without a service level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Azure Monitor for Windows Virtual Desktop glossary
This article lists and briefly describes key terms and concepts related to Azure Monitor for Windows Virtual Desktop (preview).
virtual-desktop Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-monitor.md
Title: Use Monitor Windows Virtual Desktop Monitor preview - Azure
+ Title: Use Monitor Windows Virtual Desktop Monitor - Azure
description: How to use Azure Monitor for Windows Virtual Desktop. Previously updated : 03/25/2020 Last updated : 03/29/2021
-# Use Azure Monitor for Windows Virtual Desktop to monitor your deployment (preview)
+# Use Azure Monitor for Windows Virtual Desktop to monitor your deployment
->[!IMPORTANT]
->Azure Monitor for Windows Virtual Desktop is currently in public preview. This preview version is provided without a service level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Azure Monitor for Windows Virtual Desktop (preview) is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Windows Virtual Desktop environments. This article will walk you through how to set up Azure Monitor for Windows Virtual Desktop to monitor your Windows Virtual Desktop environments.
+Azure Monitor for Windows Virtual Desktop is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Windows Virtual Desktop environments. This topic will walk you through how to set up Azure Monitor for Windows Virtual Desktop to monitor your Windows Virtual Desktop environments.
## Requirements
virtual-desktop Configure Vm Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/configure-vm-gpu.md
Follow the instructions in this article to create a GPU optimized Azure virtual
## Select an appropriate GPU optimized Azure virtual machine size
-Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), or [NVv4-series](../virtual-machines/nvv4-series.md) VM sizes. These are tailored for app and desktop virtualization and enable apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality.
+Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), or [NVv4-series](../virtual-machines/nvv4-series.md) VM sizes. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality.
>[!NOTE]
->Azure's NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Windows Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. General app and desktop acceleration with NVIDIA GPUs requires NVIDIA GRID licensing; this is provided by Azure for the recommended VM sizes but needs to be arranged separately for NC/ND-series VMs.
+>Azure's NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Windows Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They do not support GPU acceleration for most apps or the Windows user interface.
## Create a host pool, provision your virtual machine, and configure an app group
You must also configure an app group, or use the default desktop app group (name
## Install supported graphics drivers in your virtual machine
-To take advantage of the GPU capabilities of Azure N-series VMs in Windows Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) to install drivers from the appropriate graphics vendor, either manually or using an Azure VM extension.
+To take advantage of the GPU capabilities of Azure N-series VMs in Windows Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) to install drivers. Only drivers distributed by Azure are supported.
-Only drivers distributed by Azure are supported for Windows Virtual Desktop. For Azure NV-series VMs with NVIDIA GPUs, only [NVIDIA GRID drivers](../virtual-machines/windows/n-series-driver-setup.md#nvidia-grid-drivers), and not NVIDIA Tesla (CUDA) drivers, support GPU acceleration for general-purpose apps and desktops.
+* For Azure NV-series or NVv3-series VMs, only NVIDIA GRID drivers, and not NVIDIA CUDA drivers, support GPU acceleration for most apps and the Windows user interface. If you choose to install drivers manually, be sure to install GRID drivers. If you choose to install drivers using the Azure VM extension, GRID drivers will automatically be installed for these VM sizes.
+* For Azure NVv4-series VMs, install the AMD drivers provided by Azure. You may install them automatically using the Azure VM extension, or you may install them manually.
After driver installation, a VM restart is required. Use the verification steps in the above instructions to confirm that graphics drivers were successfully installed.
virtual-desktop Troubleshoot Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-azure-monitor.md
Title: Troubleshoot Monitor Windows Virtual Desktop preview - Azure
+ Title: Troubleshoot Monitor Windows Virtual Desktop - Azure
description: How to troubleshoot issues with Azure Monitor for Windows Virtual Desktop. Previously updated : 03/25/2020 Last updated : 03/29/2021
-# Troubleshoot Azure Monitor for Windows Virtual Desktop (preview)
+# Troubleshoot Azure Monitor for Windows Virtual Desktop
->[!IMPORTANT]
->Azure Monitor for Windows Virtual Desktop is currently in public preview. This preview version is provided without a service level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-This article presents known issues and solutions for common problems in Azure Monitor for Windows Virtual Desktop (preview).
+This article presents known issues and solutions for common problems in Azure Monitor for Windows Virtual Desktop.
## Issues with configuration and setup
virtual-machine-scale-sets Virtual Machine Scale Sets Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.md
If you create a VM and then update your secret in the key vault, the new certifi
To deploy .cer public keys to a virtual machine scale set, you can generate a .pfx file that contains only .cer files. To do this, use `X509ContentType = Pfx`. For example, load the .cer file as an x509Certificate2 object in C# or PowerShell, and then call the method.
-For more information, see [X509Certificate.Export Method (X509ContentType,ΓÇéString)](/dotnet/api/system.security.cryptography.x509certificates.x509certificate.export?view=netcore-3.1#system_security_cryptography_x509certificates_x509certificate_export_system_security_cryptography_x509certificates_x509contenttype_system_string_).
+For more information, see [X509Certificate.Export Method (X509ContentType, String)](/dotnet/api/system.security.cryptography.x509certificates.x509certificate.export?#system_security_cryptography_x509certificates_x509certificate_export_system_security_cryptography_x509certificates_x509contenttype_system_string_).
### How do I pass in certificates as base64 strings?
If you have two virtual machine scale sets with Azure Load Balancer front-ends,
IP addresses are selected from a subnet that you specify.
-The allocation method of virtual machine scale set IP addresses is always ΓÇ£dynamic,ΓÇ¥ but that doesn't mean that these IP addresses can change. In this case, "dynamic" only means that you do not specify the IP address in a PUT request. Specify the static set by using the subnet.
+The allocation method of virtual machine scale set IP addresses is always "dynamic," but that doesn't mean that these IP addresses can change. In this case, "dynamic" only means that you do not specify the IP address in a PUT request. Specify the static set by using the subnet.
### How do I deploy a virtual machine scale set to an existing Azure virtual network?
Yes. You can add the resource IDs for multiple Application Gateway backend addre
### In what case would I create a virtual machine scale set with fewer than two VMs?
-One reason to create a virtual machine scale set with fewer than two VMs would be to use the elastic properties of a virtual machine scale set. For example, you could deploy a virtual machine scale set with zero VMs to define your infrastructure without paying VM running costs. Then, when you are ready to deploy VMs, increase the ΓÇ£capacityΓÇ¥ of the virtual machine scale set to the production instance count.
+One reason to create a virtual machine scale set with fewer than two VMs would be to use the elastic properties of a virtual machine scale set. For example, you could deploy a virtual machine scale set with zero VMs to define your infrastructure without paying VM running costs. Then, when you are ready to deploy VMs, increase the "capacity" of the virtual machine scale set to the production instance count.
Another reason you might create a virtual machine scale set with fewer than two VMs is if you're concerned less with availability than in using an availability set with discrete VMs. Virtual machine scale sets give you a way to work with undifferentiated compute units that are fungible. This uniformity is a key differentiator for virtual machine scale sets versus availability sets. Many stateless workloads do not track individual units. If the workload drops, you can scale down to one compute unit, and then scale up to many when the workload increases.
You can set this property to **false**. For small virtual machine scale sets, th
### What is the difference between deleting a VM in a virtual machine scale set and deallocating the VM? When should I choose one over the other?
-The main difference between deleting a VM in a virtual machine scale set and deallocating the VM is that `deallocate` doesnΓÇÖt delete the virtual hard disks (VHDs). There are storage costs associated with running `stop deallocate`. You might use one or the other for one of the following reasons:
+The main difference between deleting a VM in a virtual machine scale set and deallocating the VM is that `deallocate` doesn't delete the virtual hard disks (VHDs). There are storage costs associated with running `stop deallocate`. You might use one or the other for one of the following reasons:
- You want to stop paying compute costs, but you want to keep the disk state of the VMs. - You want to start a set of VMs more quickly than you could scale out a virtual machine scale set.
virtual-machines Sizes Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-gpu.md
GPU optimized VM sizes are specialized virtual machines available with single, m
To take advantage of the GPU capabilities of Azure N-series VMs, NVIDIA or AMD GPU drivers must be installed. -- For VMs backed by NVIDIA GPUs, the [NVIDIA GPU Driver Extension](./extensions/hpccompute-gpu-windows.md) installs appropriate NVIDIA CUDA or GRID drivers. Install or manage the extension using the Azure portal or tools such as Azure PowerShell or Azure Resource Manager templates. See the [NVIDIA GPU Driver Extension documentation](./extensions/hpccompute-gpu-windows.md) for supported operating systems and deployment steps. For general information about VM extensions, see [Azure virtual machine extensions and features](./extensions/overview.md).
+- For VMs backed by NVIDIA GPUs, the [NVIDIA GPU Driver Extension](./extensions/hpccompute-gpu-windows.md) installs appropriate NVIDIA CUDA or GRID drivers. Install or manage the extension using the Azure portal or tools such as Azure PowerShell or Azure Resource Manager templates. See the [NVIDIA GPU Driver Extension documentation](./extensions/hpccompute-gpu-windows.md) for supported operating systems and deployment steps. For general information about VM extensions, see [Azure virtual machine extensions and features](./extensions/overview.md).
Alternatively, you may install NVIDIA GPU drivers manually. See [Install NVIDIA GPU drivers on N-series VMs running Windows](./windows/n-series-driver-setup.md) or [Install NVIDIA GPU drivers on N-series VMs running Linux](./linux/n-series-driver-setup.md) for supported operating systems, drivers, installation, and verification steps. -- For VMs backed by AMD GPUs, see [Install AMD GPU drivers on N-series VMs running Windows](./windows/n-series-amd-driver-setup.md) for supported operating systems, drivers, installation, and verification steps.
+- For VMs backed by AMD GPUs, the [AMD GPU driver extension](./extensions/hpccompute-amd-gpu-windows.md) installs appropriate AMD drivers. Install or manage the extension using the Azure portal or tools such as Azure PowerShell or Azure Resource Manager templates. For general information about VM extensions, see [Azure virtual machine extensions and features](./extensions/overview.md).
+
+ Alternatively, you may install AMD GPU drivers manually. See [Install AMD GPU drivers on N-series VMs running Windows](./windows/n-series-amd-driver-setup.md) for supported operating systems, drivers, installation, and verification steps.
## Deployment considerations
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
For related information, see KB article **860340.1** at <https://support.oracle.
- **Dynamic clustering and load balancing limitations.** Suppose you want to use a dynamic cluster in Oracle WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This can be done as long as you use a fixed port number for each of the managed servers (not dynamically assigned from a range) and do not start more managed servers than there are machines the administrator is tracking (that is, there is no more than one managed server per virtual machine). If your configuration results in more Oracle WebLogic Servers being started than there are virtual machines (that is, where multiple Oracle WebLogic Server instances share the same virtual machine), then it is not possible for more than one of those instances of Oracle WebLogic Servers to bind to a given port number. The others on that virtual machine fail. If you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing is not possible because Azure does not support mapping from a single public port to multiple private ports, as would be required for this configuration.-- **Multiple instances of Oracle WebLogic Server on a virtual machine.** Depending on your deployment's requirements, you might consider running multiple instances of Oracle WebLogic Server on the same virtual machine, if the virtual machine is large enough. For example, on a medium size virtual machine, which contains two cores, you could choose to run two instances of Oracle WebLogic Server. However, we still recommend that you avoid introducing single points of failure into your architecture, which would be the case if you used just one virtual machine that is running multiple instances of Oracle WebLogic Server. Using at least two virtual machines could be a better approach, and each virtual machine could then run multiple instances of Oracle WebLogic Server. Each instance of Oracle WebLogic Server could still be part of the same cluster. However, it is currently not possible to use Azure to load-balance endpoints that are exposed by such Oracle WebLogic Server deployments within the same virtual machine, because Azure load balancer requires the load-balanced servers to be distributed among unique virtual machines.
+- **Multiple instances of Oracle WebLogic Server on a virtual machine.** Depending on your deployment's requirements, you might consider running multiple instances of Oracle WebLogic Server on the same virtual machine, if the virtual machine is large enough. For example, on a midsize virtual machine, which contains two cores, you could choose to run two instances of Oracle WebLogic Server. However, we still recommend that you avoid introducing single points of failure into your architecture, which would be the case if you used just one virtual machine that is running multiple instances of Oracle WebLogic Server. Using at least two virtual machines could be a better approach, and each virtual machine could then run multiple instances of Oracle WebLogic Server. Each instance of Oracle WebLogic Server could still be part of the same cluster. However, it is currently not possible to use Azure to load-balance endpoints that are exposed by such Oracle WebLogic Server deployments within the same virtual machine, because Azure load balancer requires the load-balanced servers to be distributed among unique virtual machines.
## Oracle JDK virtual machine images
virtual-machines Ha Setup With Stonith https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/ha-setup-with-stonith.md
After the preceding fix, node2 should get added to the cluster
## 10. General Documentation
You can find more information on SUSE HA setup in the following articles:
-- [SAP HANA SR Performance Optimized Scenario](https://www.suse.com/docrep/documents/ir8w88iwu7/suse_linux_enterprise_server_for_sap_applications_12_sp1.pdf )
+- [SAP HANA SR Performance Optimized Scenario](https://www.suse.com/support/kb/doc/?id=000019450 )
- [Storage-based fencing](https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_storage_protect_fencing.html)
- [Blog - Using Pacemaker Cluster for SAP HANA- Part 1](https://blogs.sap.com/2017/11/19/be-prepared-for-using-pacemaker-cluster-for-sap-hana-part-1-basics/)
- [Blog - Using Pacemaker Cluster for SAP HANA- Part 2](https://blogs.sap.com/2017/11/19/be-prepared-for-using-pacemaker-cluster-for-sap-hana-part-2-failure-of-both-nodes/)
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-udr-overview.md
ms.devlang: NA
na Previously updated : 10/26/2017 Last updated : 03/26/2021
You can specify the following next hop types when creating a user-defined route:
You cannot specify **VNet peering** or **VirtualNetworkServiceEndpoint** as the next hop type in user-defined routes. Routes with the **VNet peering** or **VirtualNetworkServiceEndpoint** next hop types are created only by Azure when you configure a virtual network peering or a service endpoint.
-### Service Tags for user-defined routes (Public Preview)
+### Service Tags for user-defined routes (Preview)
You can now specify a [Service Tag](service-tags-overview.md) as the address prefix for a user-defined route instead of an explicit IP range. A Service Tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to user-defined routes and reducing the number of routes you need to create. You can currently create 25 or fewer routes with Service Tags in each route table.
+> [!IMPORTANT]
+> Service Tags for user-defined routes is currently in preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
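For example, a route that uses a Service Tag as its address prefix might be created with the Azure CLI roughly as follows (the resource group, route table, and next-hop appliance address are hypothetical; while the feature is in preview, verify that your CLI version accepts a Service Tag name in `--address-prefix`):

```azurecli
# Send traffic destined for the Storage service tag through a network virtual appliance.
az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name myRouteTable \
  --name ToStorage \
  --address-prefix Storage \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.100.4
```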
#### Exact Match
When there is an exact prefix match between a route with an explicit IP prefix and a route with a Service Tag, preference is given to the route with the explicit prefix. When multiple routes with Service Tags have matching IP prefixes, routes will be evaluated in the following order:
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
Last updated 08/27/2020 --+
+ - contperf-fy21q1
+ - e2e-hybrid
+ # What is VPN Gateway?
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.
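As a rough sketch (the names are hypothetical, the virtual network is assumed to already contain a subnet named *GatewaySubnet*, and provisioning can take 45 minutes or longer), a route-based VPN gateway can be created with the Azure CLI like this:

```azurecli
# Public IP address the VPN gateway will use.
az network public-ip create \
  --resource-group myResourceGroup \
  --name VNet1GWIP \
  --allocation-method Dynamic

# Create the VPN gateway in VNet1; --no-wait returns immediately while provisioning continues.
az network vnet-gateway create \
  --resource-group myResourceGroup \
  --name VNet1GW \
  --vnet VNet1 \
  --public-ip-addresses VNet1GWIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```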