Updates from: 05/03/2021 03:04:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/api-connectors-overview.md
Previously updated : 04/27/2021 Last updated : 04/30/2021
Your REST API can be based on any platform and written in any programming language.
The request to your REST API service comes from Azure AD B2C servers. The REST API service must be published to a publicly accessible HTTPS endpoint. The REST API calls will arrive from an Azure data center IP address. + Design your REST API service and its underlying components (such as the database and file system) to be highly available.
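As an illustration, a minimal API connector endpoint might look like the following HTTP-triggered Azure Function written in PowerShell. This is a sketch, not code from the article: the claim name is an assumption that depends on your user flow configuration, while the `version`/`action` body is the continuation response shape that lets the user flow proceed.

```powershell
# run.ps1 - hypothetical HTTP-triggered Azure Function called by an API connector.
using namespace System.Net

# Input bindings are passed in via the param block.
param($Request, $TriggerMetadata)

# Claims sent by Azure AD B2C arrive in the JSON request body; which claims
# appear depends on how the API connector is configured in the user flow.
$email = $Request.Body.email

# Respond with a continuation so the sign-up flow proceeds.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = (@{ version = '1.0.0'; action = 'Continue' } | ConvertTo-Json)
})
```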
active-directory-b2c Https Cipher Tls Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/https-cipher-tls-requirements.md
+
+ Title: TLS and cipher suite requirements - Azure AD B2C
+
+description: Notes for developers on HTTPS cipher suite and TLS requirements when interacting with web API endpoints.
+ Last updated : 04/30/2021
+# Azure Active Directory B2C TLS and cipher suite requirements
+
+Azure Active Directory B2C (Azure AD B2C) connects to your endpoints through [API connectors](api-connectors-overview.md) and [identity providers](oauth2-technical-profile.md) within [user flows](user-flow-overview.md). This article discusses the TLS and cipher suite requirements for your endpoints.
+
+The endpoints configured with API connectors and identity providers must be published to a publicly accessible HTTPS URI. Before a secure connection is established with the endpoint, the protocol and cipher suite are negotiated between Azure AD B2C and the endpoint based on the capabilities of both sides of the connection.
+
+Azure AD B2C must be able to connect to your endpoints using the Transport Layer Security (TLS) and cipher suites as described in this article.
+
+## TLS versions
+
+Transport Layer Security (TLS) is a cryptographic protocol that provides authentication and data encryption between servers and clients. Your endpoint must support secure communication over **TLS version 1.2**. The older TLS versions 1.0 and 1.1 are deprecated.
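+
+If your endpoint runs on Windows, one way to confirm TLS 1.2 is enabled in Schannel is through the registry. The following is a minimal sketch based on the Windows guidance linked under Next steps; verify the exact keys for your OS version, and note that non-Windows platforms configure TLS versions in the web server or framework instead.
+
+```powershell
+# Enable TLS 1.2 for server-side Schannel connections (run as Administrator).
+$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'
+New-Item $key -Force | Out-Null
+New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType 'DWORD' -Force | Out-Null
+New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWORD' -Force | Out-Null
+```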
+
+## Cipher suites
+
+Cipher suites are sets of cryptographic algorithms. A cipher suite specifies how data is secured when communicating over the HTTPS protocol through TLS.
+
+Your endpoint must support at least one of the following cipher suites (a quick way to check on Windows is shown after the list):
+
+- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+
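+If your endpoint runs on Windows, you can check which of these suites are enabled with the built-in `Get-TlsCipherSuite` cmdlet. This sketch applies to Windows hosts only; other platforms expose the equivalent through their TLS library or web server configuration.
+
+```powershell
+# List the ECDHE cipher suites currently enabled on this Windows host.
+Get-TlsCipherSuite -Name 'ECDHE' | Format-Table -Property Name
+```
+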
+## Endpoints in scope
+
+The following endpoints used in your Azure AD B2C environment must comply with the requirements described in this article:
+
+- [API connectors](api-connectors-overview.md)
+- OAuth1 identity providers
+  - Token endpoint
+  - User info endpoint
+- OAuth2 and OpenID Connect identity providers
+  - OpenID Connect discovery endpoint
+  - OpenID Connect JWKS endpoint
+  - Token endpoint
+  - User info endpoint
+- [ID token hint](id-token-hint.md)
+  - OpenID Connect discovery endpoint
+  - OpenID Connect JWKS endpoint
+- [SAML identity provider](saml-service-provider.md) metadata endpoint
+- [SAML service provider](identity-provider-generic-saml.md) metadata endpoint
+
+## Check your endpoint compatibility
+
+To verify that your endpoints comply with the requirements described in this article, run a test with a TLS and cipher suite scanner tool, such as [SSL Labs](https://www.ssllabs.com/ssltest/analyze.html).
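+
+For a quick command-line check from Windows PowerShell, you can pin the session to TLS 1.2 and probe your endpoint. This is a sketch only; the URI is a placeholder for your own endpoint.
+
+```powershell
+# Force TLS 1.2 for this session, then probe the endpoint.
+[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+Invoke-WebRequest -Uri 'https://your-api.example.com/endpoint' -Method Head
+```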
++
+## Next steps
+
+See the following articles:
+
+- [Troubleshooting applications that don't support TLS 1.2](../cloud-services/applications-dont-support-tls-1-2.md)
+- [Cipher Suites in TLS/SSL (Schannel SSP)](https://docs.microsoft.com/windows/win32/secauthn/cipher-suites-in-schannel)
+- [How to enable TLS 1.2](https://docs.microsoft.com/mem/configmgr/core/plan-design/security/enable-tls-1-2)
+- [Solving the TLS 1.0 Problem](https://docs.microsoft.com/security/engineering/solving-tls1-problem)
++++
active-directory-b2c Id Token Hint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/id-token-hint.md
Previously updated : 10/16/2020 Last updated : 04/30/2021
The following metadata is relevant when using an asymmetric key.
| issuer | No | Identifies the security token service (token issuer). This value can be used to overwrite the value configured in the metadata, and must be identical to the `iss` claim within the JWT token claim. |
| IdTokenAudience | No | Identifies the intended recipient of the token. Must be identical to the `aud` claim within the JWT token claim. |

+ ## Cryptographic keys

When using a symmetric key, the **CryptographicKeys** element contains the following attribute:
The token issuer must provide the following endpoints:
* `/.well-known/openid-configuration` - A well-known configuration endpoint with relevant information about the token, such as the token issuer name and the link to the JWK endpoint.
* `/.well-known/keys` - The JSON Web Key (JWK) endpoint with the public key that is used to validate the token's signature (the token is signed with the private key part of the certificate).
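For example, you can sanity-check both endpoints from PowerShell. This is a sketch; the issuer URI is a placeholder, and `jwks_uri` is the standard property in the OpenID Connect discovery document that links to the keys endpoint.

```powershell
# Fetch the discovery document and confirm it links to the JWKS endpoint.
$issuer = 'https://your-issuer.example.com'
$config = Invoke-RestMethod -Uri "$issuer/.well-known/openid-configuration"
$config.jwks_uri                         # should point at the /.well-known/keys endpoint
Invoke-RestMethod -Uri $config.jwks_uri  # returns the public signing key(s)
```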
-See the [TokenMetadataController.cs](https://github.com/azure-ad-b2c/id-token-builder/blob/master/source-code/B2CIdTokenBuilder/Controllers/TokenMetadataController.cs) .Net MVC controller sample.
+See the [TokenMetadataController.cs](https://github.com/azure-ad-b2c/id-token-builder/blob/master/source-code/B2CIdTokenBuilder/Controllers/TokenMetadataController.cs) .NET MVC controller sample.
#### Step 1. Prepare a self-signed certificate
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
Previously updated : 04/05/2021 Last updated : 04/30/2021
# Set up sign-up and sign-in with OpenID Connect using Azure Active Directory B2C
-[OpenID Connect](openid-connect.md) is an authentication protocol built on top of OAuth 2.0 that can be used for secure user sign-in. Most identity providers that use this protocol are supported in Azure AD B2C. This article explains how you can add custom OpenID Connect identity providers into your user flows.
+[OpenID Connect](openid-connect.md) is an authentication protocol built on top of OAuth 2.0 that can be used for secure user sign-in. Most identity providers that use this protocol are supported in Azure AD B2C.
+
+This article explains how you can add custom OpenID Connect identity providers into your user flows.
+ ## Add the identity provider
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml.md
Previously updated : 03/08/2021 Last updated : 04/30/2021
The following components are required for this scenario:
* A SAML **identity provider** with the ability to receive, decode, and respond to SAML requests from Azure AD B2C.
* A publicly available SAML **metadata endpoint** for your identity provider.
* An [Azure AD B2C tenant](tutorial-create-tenant.md).
-
+ ## Create a policy key
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider.md
Previously updated : 04/05/2021 Last updated : 04/30/2021
If you don't yet have a SAML application and an associated metadata endpoint, you can use this sample SAML application for testing:
[SAML Test Application][samltest]

+ ## Set up certificates

To build a trust relationship between your application and Azure AD B2C, both services must be able to create and validate each other's signatures. You configure X509 certificates in Azure AD B2C and in your application.
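For testing purposes, a self-signed certificate is often sufficient. The following sketch uses the Windows `New-SelfSignedCertificate` cmdlet; the subject name and file name are placeholders.

```powershell
# Create a self-signed signing certificate and export it as a PFX for upload.
$cert = New-SelfSignedCertificate -Subject 'CN=yourtenant.onmicrosoft.com' `
    -CertStoreLocation 'Cert:\CurrentUser\My' -KeySpec Signature
$pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
Export-PfxCertificate -Cert $cert -FilePath .\b2c-saml-cert.pfx -Password $pfxPassword
```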
active-directory Application Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-migration.md
- Title: Upgrade to Azure Active Directory Application Proxy
-description: Choose which proxy solution is best if you're upgrading from Microsoft Forefront or Unified Access Gateway.
- Previously updated : 04/29/2021
-# Compare remote access solutions
-
-Azure Active Directory Application Proxy is one of two remote access solutions that Microsoft offers. The other is Web Application Proxy, the on-premises version. These two solutions replace earlier products that Microsoft offered: Microsoft Forefront Threat Management Gateway (TMG) and Unified Access Gateway (UAG). Use this article to understand how these four solutions compare to each other. For those of you still using the deprecated TMG or UAG solutions, use this article to help plan your migration to Application Proxy.
--
-## Feature comparison
-
-Use this table to understand how Threat Management Gateway (TMG), Unified Access Gateway (UAG), Web Application Proxy (WAP), and Azure AD Application Proxy (AP) compare to each other.
-
-| Feature | TMG | UAG | WAP | AP |
-| - | - | - | - | - |
-| Certificate authentication | Yes | Yes | - | - |
-| Selectively publish browser apps | Yes | Yes | Yes | Yes |
-| Preauthentication and single sign-on | Yes | Yes | Yes | Yes |
-| Layer 2/3 firewall | Yes | Yes | - | - |
-| Forward proxy capabilities | Yes | - | - | - |
-| VPN capabilities | Yes | Yes | - | - |
-| Rich protocol support | - | Yes | Yes, if running over HTTP | Yes, if running over HTTP or through Remote Desktop Gateway |
-| Serves as ADFS proxy server | - | Yes | Yes | - |
-| One portal for application access | - | Yes | - | Yes |
-| Response body link translation | Yes | Yes | - | Yes |
-| Authentication with headers | - | Yes | - | Yes, with PingAccess |
-| Cloud-scale security | - | - | - | Yes |
-| Conditional Access | - | Yes | - | Yes |
-| No components in the demilitarized zone (DMZ) | - | - | - | Yes |
-| No inbound connections | - | - | - | Yes |
-
-For most scenarios, we recommend Azure AD Application Proxy as the modern solution. Web Application Proxy is preferred only in scenarios that require a proxy server for AD FS and where you can't use custom domains in Azure Active Directory.
-
-Azure AD Application Proxy offers unique benefits when compared to similar products, including:
-- Extending Azure AD to on-premises resources
- - Cloud-scale security and protection
- - Features like Conditional Access and Multi-Factor Authentication are easy to enable
-- No components in the demilitarized zone
-- No inbound connections required
-- One My Apps page that your users can go to for all their applications, including Microsoft 365, Azure AD integrated SaaS apps, and your on-premises web apps.
-## Next steps
--- [Use Azure AD Application Proxy to provide secure remote access to on-premises applications](application-proxy.md)
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
This setting works with all browsers. However, to satisfy a device policy, like
> [!NOTE] > Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
+> Safari is supported for device-based Conditional Access, but it cannot satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
#### Why do I see a certificate prompt in the browser
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/whats-new-docs.md
Previously updated : 12/15/2020 Last updated : 04/30/2021
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## April 2021
+
+### New articles
+
+- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
+- [How to migrate a Node.js app from ADAL to MSAL](msal-node-migration.md)
+
+### Updated articles
+
+- [Configurable token lifetimes in the Microsoft identity platform (preview)](active-directory-configurable-token-lifetimes.md)
+- [Configure token lifetime policies (preview)](configure-token-lifetimes.md)
+- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md)
+- [Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md)
+- [Quickstart: Sign in users and get an access token in a Node web app using the auth code flow](quickstart-v2-nodejs-webapp-msal.md)
+- [Quickstart: Sign in users and get an access token in an Angular single-page application](quickstart-v2-angular.md)
+- [Single-page application: Acquire a token to call an API](scenario-spa-acquire-token.md)
+- [Single-page application: Code configuration](scenario-spa-app-configuration.md)
+- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)
+- [Use MSAL in a national cloud environment](msal-national-cloud.md)
+- [Understanding Azure AD application consent experiences](application-consent-experience.md)
+ ## March 2021

### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Quickstart: Set up a tenant](quickstart-create-new-tenant.md)
- [Quickstart: Register an application with the Microsoft identity platform](quickstart-register-app.md)
- [Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity](quickstart-v2-java-daemon.md)
-## January 2021
-
-### New articles
-- [Logging in MSAL for Android](msal-logging-android.md)
-- [Logging in MSAL.NET](msal-logging-dotnet.md)
-- [Logging in MSAL for iOS/macOS](msal-logging-ios.md)
-- [Logging in MSAL for Java](msal-logging-java.md)
-- [Logging in MSAL.js](msal-logging-js.md)
-- [Logging in MSAL for Python](msal-logging-python.md)
-### Updated articles
-- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
-- [Application model](application-model.md)
-- [Authentication vs. authorization](authentication-vs-authorization.md)
-- [How to: Restrict your Azure AD app to a set of users in an Azure AD tenant](howto-restrict-your-app-to-a-set-of-users.md)
-- [Permissions and consent in the Microsoft identity platform endpoint](v2-permissions-and-consent.md)
-- [Configurable token lifetimes in Microsoft identity platform (preview)](active-directory-configurable-token-lifetimes.md)
-- [Configure token lifetime policies (preview)](configure-token-lifetimes.md)
-- [Microsoft identity platform authentication libraries](reference-v2-libraries.md)
-- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md)
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/manage-stale-devices.md
Previously updated : 06/28/2019 Last updated : 04/30/2021
Ideally, to complete the lifecycle, registered devices should be unregistered when they aren't needed anymore. However, because devices can be lost, stolen, or broken, or the OS can be reinstalled, you typically have stale devices in your environment. As an IT admin, you probably want a method to remove stale devices, so that you can focus your resources on managing devices that actually require management. In this article, you learn how to efficiently manage stale devices in your environment.
-
## What is a stale device?
You have two options to retrieve the value of the activity timestamp:
- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet
- :::image type="content" source="./media/manage-stale-devices/02.png" alt-text="Screenshot showing command line output. One line is highlighted and lists a time stamp for the ApproximateLastLogonTimeStamp value." border="false":::
+ :::image type="content" source="./media/manage-stale-devices/02.png" alt-text="Screenshot showing command-line output. One line is highlighted and lists a time stamp for the ApproximateLastLogonTimeStamp value." border="false":::
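For example, with the AzureAD PowerShell V2 module, a sketch of listing the timestamp for every device:

```powershell
# List the activity timestamp for every device in the tenant.
Get-AzureADDevice -All:$true |
    Select-Object -Property DisplayName, DeviceId, ApproximateLastLogonTimeStamp
```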
## Plan the cleanup of your stale devices
Define a timeframe that is your indicator for a stale device. When defining your
### Disable devices
-It is not advisable to immediately delete a device that appears to be stale because you can't undo a deletion in the case of false positives. As a best practice, disable a device for a grace period before deleting it. In your policy, define a timeframe to disable a device before deleting it.
+It is not advisable to immediately delete a device that appears to be stale because you can't undo a deletion if there is a false positive. As a best practice, disable a device for a grace period before deleting it. In your policy, define a timeframe to disable a device before deleting it.
### MDM-controlled devices
If your device is under control of Intune or any other MDM solution, retire the
### System-managed devices
-Don't delete system-managed devices. These are generally devices such as Autopilot. Once deleted, these devices can't be reprovisioned. The new `Get-AzureADDevice` cmdlet excludes system-managed devices by default.
+Don't delete system-managed devices. These are generally Autopilot devices. Once deleted, these devices can't be reprovisioned. The new `Get-AzureADDevice` cmdlet excludes system-managed devices by default.
### Hybrid Azure AD joined devices Your hybrid Azure AD joined devices should follow your policies for on-premises stale device management.
-To cleanup Azure AD:
+To clean up Azure AD:
- **Windows 10 devices** - Disable or delete Windows 10 devices in your on-premises AD, and let Azure AD Connect synchronize the changed device status to Azure AD.
- **Windows 7/8** - Disable or delete Windows 7/8 devices in your on-premises AD first. You can't use Azure AD Connect to disable or delete Windows 7/8 devices in Azure AD. Instead, when you make the change on-premises, you must also disable or delete the device in Azure AD.
To cleanup Azure AD:
>* Removing the device from sync scope for Windows 10/Server 2016 devices will delete the Azure AD device. Adding it back to sync scope will place a new object in "Pending" state. A re-registration of the device is required.
>* If you're not using Azure AD Connect for Windows 10 devices to synchronize (for example, ONLY using AD FS for registration), you must manage the lifecycle similar to Windows 7/8 devices.

### Azure AD joined devices
Disable or delete Azure AD joined devices in Azure AD.
Disable or delete Azure AD registered devices in Azure AD.
## Clean up stale devices in the Azure portal
-While you can cleanup stale devices in the Azure portal, it is more efficient, to handle this process using a PowerShell script. Use the latest PowerShell V2 module to use the timestamp filter and to filter out system-managed devices such as Autopilot.
+While you can clean up stale devices in the Azure portal, it is more efficient to handle this process with a PowerShell script. Use the latest PowerShell V2 module to use the timestamp filter and to filter out system-managed devices such as Autopilot.
A typical routine consists of the following steps:
$dt = [datetime]'2017/01/01'
Get-AzureADDevice -All:$true | Where {$_.ApproximateLastLogonTimeStamp -le $dt} | Select-Object -Property Enabled, DeviceId, DisplayName, DeviceTrustType, ApproximateLastLogonTimestamp | Export-Csv devicelist-olderthan-Jan-1-2017-summary.csv
```
+#### Set devices to disabled
+
+Using the same commands, you can pipe each stale device to `Set-AzureADDevice` to disable the devices over a certain age.
+
+```powershell
+$dt = [datetime]'2017/01/01'
+Get-AzureADDevice -All:$true | Where {$_.ApproximateLastLogonTimeStamp -le $dt} | ForEach-Object { Set-AzureADDevice -ObjectId $_.ObjectId -AccountEnabled $false }
+```
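+
+After the grace period, a possible follow-up is to delete devices that are both stale and already disabled. This is a sketch only; the timeframes are illustrative and should come from your own policy.
+
+```powershell
+$dt = (Get-Date).AddDays(-120)   # for example, a 90-day stale window plus a 30-day grace period
+Get-AzureADDevice -All:$true |
+    Where-Object { $_.ApproximateLastLogonTimeStamp -le $dt -and $_.AccountEnabled -eq $false } |
+    ForEach-Object { Remove-AzureADDevice -ObjectId $_.ObjectId }
+```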
+ ## What you should know ### Why is the timestamp not updated more frequently?
-The timestamp is updated to support device lifecycle scenarios. This is not an audit. Use the sign-in audit logs for more frequent updates on the device.
+The timestamp is updated to support device lifecycle scenarios. This attribute isn't an audit trail. Use the sign-in audit logs for more frequent updates on the device.
### Why should I worry about my BitLocker keys?
-When configured, BitLocker keys for Windows 10 devices are stored on the device object in Azure AD. If you delete a stale device, you also delete the BitLocker keys that are stored on the device. You should determine whether your cleanup policy aligns with the actual lifecycle of your device before deleting a stale device.
+When configured, BitLocker keys for Windows 10 devices are stored on the device object in Azure AD. If you delete a stale device, you also delete the BitLocker keys that are stored on the device. Confirm that your cleanup policy aligns with the actual lifecycle of your device before deleting a stale device.
### Why should I worry about Windows Autopilot devices?
When you delete an Azure AD device that was associated with a Windows Autopilot object, the following three scenarios can occur if the device will be repurposed in the future:
- With Windows Autopilot user-driven deployments without using pre-provisioning, a new Azure AD device will be created, but it won't be tagged with the ZTDID.
- With Windows Autopilot self-deploying mode deployments, they will fail because an associated Azure AD device cannot be found. (This failure is a security mechanism to make sure that no "imposter" devices try to join Azure AD with no credentials.) The failure will indicate a ZTDID mismatch.
- With Windows Autopilot pre-provisioning deployments, they will fail because an associated Azure AD device cannot be found. (Behind the scenes, pre-provisioning deployments use the same self-deploying mode process, so they enforce the same security mechanisms.)

### How do I know all the types of devices joined?
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Previously updated : 3/31/2021 Last updated : 4/30/2021
The What's new in Azure Active Directory? release notes provide information abou
+## October 2020
+
+### Azure AD On-Premises Hybrid Agents Impacted by Azure TLS Certificate Changes
+
+**Type:** Plan for change
+**Service category:** N/A
+**Product capability:** Platform
+
+Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This update is due to the current CA certificates not complying with one of the CA/Browser Forum Baseline requirements. This change will impact Azure AD hybrid agents installed on-premises that have hardened environments with a fixed list of root certificates; these agents will need to be updated to trust the new certificate issuers.
+
+This change will result in disruption of service if you don't take action immediately. These agents include [Application Proxy connectors](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AppProxy) for remote access to on-premises, [Passthrough Authentication](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that allow your users to sign in to applications using the same passwords, and [Cloud Provisioning Preview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that perform AD to Azure AD sync.
+
+If you have an environment with firewall rules set to allow outbound calls to only specific Certificate Revocation List (CRL) download, you will need to allow the following CRL and OCSP URLs. For full details on the change and the CRL and OCSP URLs to enable access to, see [Azure TLS certificate changes](../../security/fundamentals/tls-certificate-changes.md).
+++
+### Provisioning events will be removed from audit logs and published solely to provisioning logs
+
+**Type:** Plan for change
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+Activity by the SCIM [provisioning service](../app-provisioning/user-provisioning.md) is logged in both the audit logs and provisioning logs. This includes activity such as the creation of a user in ServiceNow, group in GSuite, or import of a role from AWS. In the future, these events will only be published in the provisioning logs. This change is being implemented to avoid duplicate events across logs, and additional costs incurred by customers consuming the logs in log analytics.
+
+We'll provide an update when a date is set. This deprecation isn't planned for the calendar year 2020.
+
+> [!NOTE]
+> This does not impact any events in the audit logs outside of the synchronization events emitted by the provisioning service. Events such as the creation of an application, conditional access policy, a user in the directory, etc. will continue to be emitted in the audit logs. [Learn more](../reports-monitoring/concept-provisioning-logs.md?context=azure%2factive-directory%2fapp-provisioning%2fcontext%2fapp-provisioning-context).
+
+++
+### Azure AD On-Premises Hybrid Agents Impacted by Azure Transport Layer Security (TLS) Certificate Changes
+
+**Type:** Plan for change
+**Service category:** N/A
+**Product capability:** Platform
+
+Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). There will be an update because of the current CA certificates not following one of the CA/Browser Forum Baseline requirements. This change will impact Azure AD hybrid agents installed on-premises that have hardened environments with a fixed list of root certificates. These agents will need to be updated to trust the new certificate issuers.
+
+This change will result in disruption of service if you don't take action immediately. These agents include:
+- [Application Proxy connectors](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AppProxy) for remote access to on-premises
+- [Passthrough Authentication](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that allow your users to sign in to applications using the same passwords
+- [Cloud Provisioning Preview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that do AD to Azure AD sync.
+
+If you have an environment with firewall rules set to allow outbound calls to only specific Certificate Revocation List (CRL) download, you'll need to allow CRL and OCSP URLs. For full details on the change and the CRL and OCSP URLs to enable access to, see [Azure TLS certificate changes](../../security/fundamentals/tls-certificate-changes.md).
+
++
+### Azure Active Directory TLS 1.0, TLS 1.1, and 3DES Deprecation in US Gov Cloud
+
+**Type:** Plan for change
+**Service category:** All Azure AD applications
+**Product capability:** Standards
+
+Azure Active Directory will deprecate the following protocols starting March 31, 2021:
+- TLS 1.0
+- TLS 1.1
+- 3DES cipher suite (TLS_RSA_WITH_3DES_EDE_CBC_SHA)
+
+All client-server and browser-server combinations should use TLS 1.2 and modern cipher suites to maintain a secure connection to Azure Active Directory for Azure, Office 365, and Microsoft 365 services.
+
+Affected environments are:
+- Azure US Gov
+- [Office 365 GCC High & DoD](/microsoft-365/compliance/tls-1-2-in-office-365-gcc)
+
+For guidance on removing dependencies on deprecated protocols, see [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
+
++
+### Assign applications to roles on administrative unit and object scope
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+This feature enables the ability to assign an application (SPN) to an administrator role on the administrative unit scope. To learn more, refer to [Assign scoped roles to an administrative unit](../roles/admin-units-assign-roles.md).
+++
+### Now you can disable and delete guest users when they're denied access to a resource
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Disable and delete is an advanced control in Azure AD Access Reviews to help organizations better manage external guests in Groups and Apps. If guests are denied in an access review, **disable and delete** will automatically block them from signing in for 30 days. After 30 days, they'll be removed from the tenant altogether.
+
+For more information about this feature, see [Disable and delete external identities with Azure AD Access Reviews](../governance/access-reviews-external-users.md#disable-and-delete-external-identities-with-azure-ad-access-reviews).
+
++
+### Access Review creators can add custom messages in emails to reviewers
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+In Azure AD access reviews, administrators creating reviews can now write a custom message to the reviewers. Reviewers will see the message in the email they receive that prompts them to complete the review. To learn more about using this feature, see step 14 of the [Create one or more access reviews](../governance/create-access-review.md#create-one-or-more-access-reviews) section.
+++
+### New provisioning connectors in the Azure AD Application Gallery - October 2020
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Apple Business Manager](../saas-apps/apple-business-manager-provision-tutorial.md)
+- [Apple School Manager](../saas-apps/apple-school-manager-provision-tutorial.md)
+- [Code42](../saas-apps/code42-provisioning-tutorial.md)
+- [AlertMedia](../saas-apps/alertmedia-provisioning-tutorial.md)
+- [OpenText Directory Services](../saas-apps/open-text-directory-services-provisioning-tutorial.md)
+- [Cinode](../saas-apps/cinode-provisioning-tutorial.md)
+- [Global Relay Identity Sync](../saas-apps/global-relay-identity-sync-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+### Integration assistant for Azure AD B2C
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+The Integration Assistant (preview) experience is now available for Azure AD B2C App registrations. This experience helps guide you in configuring your application for common scenarios. Learn more about [Microsoft identity platform best practices and recommendations](../develop/identity-platform-integration-checklist.md).
+
++
+### View role template ID in Azure portal UI
+
+**Type:** New feature
+**Service category:** Azure roles
+**Product capability:** Access Control
+
+
+You can now view the template ID of each Azure AD role in the Azure portal. In Azure AD, select the **Description** of the selected role.
+
+We recommend that customers use role template IDs in their PowerShell scripts and code, instead of the display name. Role template IDs are supported for use with [directoryRoles](/graph/api/resources/directoryrole) and [roleDefinition](/graph/api/resources/unifiedroledefinition?view=graph-rest-beta&preserve-view=true) objects. For more information on role template IDs, see [Azure AD built-in roles](../roles/permissions-reference.md).
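+
+For example, a sketch of resolving a role by its immutable template ID instead of its display name (the GUID is a placeholder for the template ID you copy from the portal):
+
+```powershell
+# Find the activated directory role whose template matches the given ID.
+$templateId = '62e90394-69f5-4237-9190-012177145e10'
+Get-AzureADDirectoryRole | Where-Object { $_.RoleTemplateId -eq $templateId }
+```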
+++
+### API connectors for Azure AD B2C sign-up user flows is now in public preview
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+
+API connectors are now available for use with Azure Active Directory B2C. API connectors enable you to use web APIs to customize your sign-up user flows and integrate with external cloud systems. You can use API connectors to:
+
+- Integrate with custom approval workflows
+- Validate user input data
+- Overwrite user attributes
+- Run custom business logic
+
+ Visit the [Use API connectors to customize and extend sign-up](../../active-directory-b2c/api-connectors-overview.md) documentation to learn more.
+++
+### State property for connected organizations in entitlement management
+
+**Type:** New feature
+**Service category:** Directory Management
+**Product capability:** Entitlement Management
+
+
+ All connected organizations will now have an additional property called "State". The state will control how the connected organization will be used in policies that refer to "all configured connected organizations". The value will be either "configured" (meaning the organization is in the scope of policies that use the "all" clause) or "proposed" (meaning that the organization isn't in scope).
+
+Manually created connected organizations will have a default setting of "configured". Meanwhile, automatically created ones (created via policies that allow any user from the internet to request access) will default to "proposed." Any connected organizations created before September 9, 2020 will be set to "configured." Admins can update this property as needed. [Learn more](../governance/entitlement-management-organization.md#managing-a-connected-organization-programmatically).
+
+++
+### Azure Active Directory External Identities now has premium advanced security settings for B2C
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+Risk-based Conditional Access and risk detection features of Identity Protection are now available in [Azure AD B2C](../..//active-directory-b2c/conditional-access-identity-protection-overview.md). With these advanced security features, customers can now:
+- Leverage intelligent insights to assess risk with B2C apps and end user accounts. Detections include atypical travel, anonymous IP addresses, malware-linked IP addresses, and Azure AD threat intelligence. Portal and API-based reports are also available.
+- Automatically address risks by configuring adaptive authentication policies for B2C users. App developers and administrators can mitigate real-time risk by requiring multi-factor authentication (MFA) or blocking access depending on the user risk level detected, with additional controls available based on location, group, and app.
+- Integrate with Azure AD B2C user flows and custom policies. Conditions can be triggered from built-in user flows in Azure AD B2C or can be incorporated into B2C custom policies. As with other aspects of the B2C user flow, end user experience messaging can be customized. Customization is according to the organization's voice, brand, and mitigation alternatives.
+
++
+### New Federated Apps available in Azure AD Application gallery - October 2020
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In October 2020, we added the following 27 new applications to our App gallery with Federation support:
+
+[Sentry](../saas-apps/sentry-tutorial.md), [Bumblebee - Productivity Superapp](https://app.yellowmessenger.com/user/login), [ABBYY FlexiCapture Cloud](../saas-apps/abbyy-flexicapture-cloud-tutorial.md), [EAComposer](../saas-apps/eacomposer-tutorial.md), [Genesys Cloud Integration for Azure](https://apps.mypurecloud.com/msteams-integration/), [Zone Technologies Portal](https://portail.zonetechnologie.com/signin), [Beautiful.ai](../saas-apps/beautiful.ai-tutorial.md), [Datawiza Access Broker](https://console.datawiza.com/), [ZOKRI](https://app.zokri.com/), [CheckProof](../saas-apps/checkproof-tutorial.md), [Ecochallenge.org](https://events.ecochallenge.org/users/login), [atSpoke](http://atspoke.com/login), [Appointment Reminder](https://app.appointmentreminder.co.nz/account/login), [Cloud.Market](https://cloud.market/), [TravelPerk](../saas-apps/travelperk-tutorial.md), [Greetly](https://app.greetly.com/), [OrgVitality SSO](../saas-apps/orgvitality-sso-tutorial.md), [Web Cargo Air](../saas-apps/web-cargo-air-tutorial.md), [Loop Flow CRM](../saas-apps/loop-flow-crm-tutorial.md), [Starmind](../saas-apps/starmind-tutorial.md), [Workstem](https://hrm.workstem.com/login), [Retail Zipline](../saas-apps/retail-zipline-tutorial.md), [Hoxhunt](../saas-apps/hoxhunt-tutorial.md), [MEVISIO](../saas-apps/mevisio-tutorial.md), [Samsara](../saas-apps/samsara-tutorial.md), [Nimbus](../saas-apps/nimbus-tutorial.md), [Pulse Secure virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### Provisioning logs can now be streamed to log analytics
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+
+Publish your provisioning logs to log analytics in order to:
+- Store provisioning logs for more than 30 days
+- Define custom alerts and notifications
+- Build dashboards to visualize the logs
+- Execute complex queries to analyze the logs
+
+To learn how to use the feature, see [Understand how provisioning integrates with Azure Monitor logs](../app-provisioning/application-provisioning-log-analytics.md).
+
++
+### Provisioning logs can now be viewed by application owners
+
+**Type:** Changed feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+You can now allow application owners to monitor activity by the provisioning service and troubleshoot issues without providing them a privileged role or making IT a bottleneck. [Learn more](../reports-monitoring/concept-provisioning-logs.md).
+
++
+### Renaming 10 Azure Active Directory roles
+
+**Type:** Changed feature
+**Service category:** Azure roles
+**Product capability:** Access Control
+
+Some Azure Active Directory (AD) built-in roles have names that differ from those that appear in Microsoft 365 admin center, the Azure AD portal, and Microsoft Graph. This inconsistency can cause problems in automated processes. With this update, we're renaming 10 roles to make their names consistent. The following table has the new role names:
+
+![Table showing role names in MS Graph API and the Azure portal, and the proposed new role name in M365 Admin Center, Azure portal, and API.](media/whats-new/azure-role.png)
+++
+### Azure AD B2C support for auth code flow for SPAs using MSAL JS 2.x
+
+**Type:** Changed feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+MSAL.js version 2.x now includes support for the authorization code flow for single-page web apps (SPAs). Azure AD B2C will now support the use of the SPA app type on the Azure portal and the use of MSAL.js authorization code flow with PKCE for single-page apps. This will allow SPAs using Azure AD B2C to maintain SSO with newer browsers and abide by newer authentication protocol recommendations. Get started with the [Register a single-page application (SPA) in Azure Active Directory B2C](../../active-directory-b2c/tutorial-register-spa.md) tutorial.
+++
+### Updates to Remember Multi-Factor Authentication (MFA) on a trusted device setting
+
+**Type:** Changed feature
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+
+We've recently updated the [remember Multi-Factor Authentication (MFA)](../authentication/howto-mfa-mfasettings.md#remember-multi-factor-authentication) on a trusted device feature to extend authentication for up to 365 days. Users with Azure Active Directory (Azure AD) Premium licenses can also use the [Conditional Access - Sign-in Frequency policy](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency), which provides more flexibility for reauthentication settings.
+
+For the optimal user experience, we recommend using Conditional Access sign-in frequency to extend session lifetimes on trusted devices, locations, or low-risk sessions as an alternative to the remember MFA on a trusted device setting. To get started, review our [latest guidance on optimizing the reauthentication experience](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+++

## September 2020

### New provisioning connectors in the Azure AD Application Gallery - September 2020
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Previously updated : 3/31/2021 Last updated : 4/30/2021
This page is updated monthly, so revisit it regularly. If you're looking for ite
+## April 2021
+
+### Bug fixed - Azure AD will no longer double-encode the state parameter in responses
+
+**Type:** Fixed
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+Azure AD has identified, tested, and released a fix for a bug in the `/authorize` response to a client application. Azure AD was incorrectly URL encoding the `state` parameter twice when sending responses back to the client. This can cause a client application to reject the request, due to a mismatch in state parameters. [Learn more](../develop/reference-breaking-changes.md#bug-fix-azure-ad-will-no-longer-url-encode-the-state-parameter-twice).
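+
+As an illustration of what the double encoding looked like (illustrative PowerShell, not code from the fix):
+
+```powershell
+$state = 'abc 123'
+$once  = [uri]::EscapeDataString($state)   # abc%20123   - encoded once (correct)
+$twice = [uri]::EscapeDataString($once)    # abc%2520123 - encoded twice (the bug)
+```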
+++
+### "Users can only create security and Microsoft 365 groups in Azure portal" setting is being deprecated
+
+**Type:** Plan for change
+**Service category:** Group Management
+**Product capability:** Directory
+
+Users will no longer be limited to creating security and Microsoft 365 groups only in the Azure portal. The new setting will allow users to create security groups in the Azure portal, PowerShell, and API. Users will be required to verify and update the new setting. [Learn more](../enterprise-users/groups-self-service-management.md).
+++
+### Public Preview - External Identities Self-Service Sign-up in AAD using Email One-Time Passcode accounts
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+External users can now use Email One-Time Passcode accounts to sign up or sign in to Azure AD first-party and line-of-business applications. [Learn more](../external-identities/one-time-passcode.md).
+++
+### General Availability - External Identities Self-Service Sign Up
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Self-service sign-up for external users is now in general availability. With this new feature, external users can now sign up to an application by using self-service.
+
+You can create customized experiences for these external users, including collecting information about your users during the registration process and allowing external identity providers like Facebook and Google. You can also integrate with third-party cloud providers for various functionalities like identity verification or approval of users. [Learn more](../external-identities/self-service-sign-up-overview.md).
+
++
+### General availability - Azure AD B2C Phone Sign-up and Sign-in using Built-in Policy
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+B2C Phone Sign-up and Sign-in using a built-in policy enables IT administrators and developers to let their end users sign up and sign in with a phone number in user flows. With this feature, disclaimer links such as privacy policy and terms of use can be customized and shown on the page before the end user proceeds to receive the one-time passcode via text message. [Learn more](../../active-directory-b2c/phone-authentication-user-flows.md).
+
++
+### New Federated Apps available in Azure AD Application gallery - April 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In April 2021, we added the following 31 new applications to our App gallery with Federation support:
+
+[Zii Travel Azure AD Connect](http://ziitravel.com/), [Cerby](../saas-apps/cerby-tutorial.md), [Selflessly](https://app.selflessly.io/sign-in), [Apollo CX](https://apollo.cxlabs.de/sso/aad), [Pedagoo](https://account.pedagoo.com/), [Measureup](https://account.measureup.com/), [Wistec Education](https://wisteceducation.fi/login/index.php), [ProcessUnity](../saas-apps/processunity-tutorial.md), [Cisco Intersight](../saas-apps/cisco-intersight-tutorial.md), [Codility](../saas-apps/codility-tutorial.md), [H5mag](https://account.h5mag.com/auth/request-access/ms365), [Check Point Identity Awareness](../saas-apps/check-point-identity-awareness-tutorial.md), [Jarvis](https://jarvis.live/login), [desknet's NEO](../saas-apps/desknets-neo-tutorial.md), [SDS & Chemical Information Management](../saas-apps/sds-chemical-information-management-tutorial.md), [Wúru App](../saas-apps/wuru-app-tutorial.md), [Holmes](../saas-apps/holmes-tutorial.md), [Tide Multi Tenant](https://gallery.tideapp.co.uk/), [Telenor](https://admin.smartansatt.telenor.no/), [Yooz US](https://us1.getyooz.com/?kc_idp_hint=microsoft), [Mooncamp](https://app.mooncamp.com/#/login), [inwise SSO](https://app.inwise.com/defaultsso.aspx), [Ecolab Digital Solutions](https://ecolabb2c.b2clogin.com/account.ecolab.com/oauth2/v2.0/authorize?p=B2C_1A_Connect_OIDC_SignIn&client_id=01281626-dbed-4405-a430-66457825d361&nonce=defaultNonce&redirect_uri=https://jwt.ms&scope=openid&response_type=id_token&prompt=login), [Taguchi Digital Marketing System](https://login.taguchi.com.au/), [XpressDox EU Cloud](https://test.xpressdox.com/Authentication/Login.aspx), [EZSSH](https://docs.keytos.io/getting-started/registering-a-new-tenant/registering_app_in_tenant/), [EZSSH Client](https://portal.ezssh.io/signup), [Verto 365](https://www.vertocloud.com/Login/), [KPN Grip](https://www.grip-on-it.com/), [AddressLook](https://portal.bbsonlineservices.net/Manage/AddressLook), [Cornerstone Single Sign-On](../saas-apps/cornerstone-ondemand-tutorial.md)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### New provisioning connectors in the Azure AD Application Gallery - April 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Bentley - Automatic User Provisioning](../saas-apps/bentley-automatic-user-provisioning-tutorial.md)
+- [Boxcryptor](../saas-apps/boxcryptor-provisioning-tutorial.md)
+- [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-provisioning-tutorial.md)
+- [Eletive](../saas-apps/eletive-provisioning-tutorial.md)
+- [Jostle](../saas-apps/jostle-provisioning-tutorial.md)
+- [Olfeo SAAS](../saas-apps/olfeo-saas-provisioning-tutorial.md)
+- [Proware](../saas-apps/proware-provisioning-tutorial.md)
+- [Segment](../saas-apps/segment-provisioning-tutorial.md)
+
+For more information about how to better secure your organization with automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+### Introducing new versions of page layouts for B2C
+
+**Type:** Changed feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+The [page layouts](../../active-directory-b2c/page-layout.md) for B2C scenarios in Azure AD B2C have been updated to reduce security risks by introducing new versions of jQuery and Handlebars JS.
+
++
+### Updates to Sign-in Diagnostic
+
+**Type:** Changed feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+The scenario coverage of the Sign-in Diagnostic tool has increased.
+
+With this update, the following event-related scenarios will now be included in the sign-in diagnosis results:
+- Enterprise Applications configuration problem events.
+- Enterprise Applications service provider (application-side) events.
+- Incorrect credentials events.
+
+These results will show contextual and relevant details about the event and actions to take to resolve these problems. Also, for scenarios where we don't have deep contextual diagnostics, Sign-in Diagnostic will present more descriptive content about the error event.
+
+For more information, see [What is sign-in diagnostic in Azure AD?](../reports-monitoring/overview-sign-in-diagnostics.md)
+++

## March 2021

### Guidance on how to enable support for TLS 1.2 in your environment, in preparation for upcoming Azure AD TLS 1.0/1.1 deprecation
Affected environments include:
- Azure Commercial Cloud
- Office 365 GCC and WW
-For additional guidance, refer to [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
+For more information, see [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
External users will now be able to use Email One-Time Passcode accounts to sign
**Service category:** Authentications (Logins) **Product capability:** Monitoring & Reporting
-AD FS sign-in activity can now be integrated with Azure AD activity reporting, providing a unified view of hybrid identity infrastructure. Using the Azure AD Sign-Ins report, Log Analytics, and Azure Monitor Workbooks, it's possible to perform in-depth analysis for both AAD and AD FS sign-in scenarios such as AD FS account lockouts, bad password attempts, and spikes of unexpected sign-in attempts.
+AD FS sign-in activity can now be integrated with Azure AD activity reporting, providing a unified view of hybrid identity infrastructure. Using the Azure AD Sign-Ins report, Log Analytics, and Azure Monitor Workbooks, it's possible to do in-depth analysis for both AAD and AD FS sign-in scenarios such as AD FS account lockouts, bad password attempts, and spikes of unexpected sign-in attempts.
To learn more, visit [AD FS sign-ins in Azure AD with Connect Health](../hybrid/how-to-connect-health-ad-fs-sign-in.md).
Affected environments are:
- Azure Commercial Cloud
- Office 365 GCC and WW
-Related announcement
-All client-server and browser-server combinations should use TLS 1.2 and modern cipher suites to maintain a secure connection to Azure Active Directory for Azure, Office 365, and Microsoft 365 services. This change is related to [Azure Active Directory TLS 1.0 & 1.1, and 3DES Cipher Suite Deprecation in US Gov Cloud](whats-new.md#azure-active-directory-tls-10-tls-11-and-3des-deprecation-in-us-gov-cloud).
- For guidance to remove deprecating protocols dependencies, please refer to [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
Enhanced dynamic group service is now in Public Preview. New customers that crea
The new service also aims to complete member additions and removals caused by attribute changes within a few minutes. Also, single processing failures won't block tenant processing. To learn more about creating dynamic groups, see our [documentation](../enterprise-users/groups-create-rule.md).
-## October 2020
-
-### Azure AD On-Premises Hybrid Agents Impacted by Azure TLS Certificate Changes
-
-**Type:** Plan for change
-**Service category:** N/A
-**Product capability:** Platform
-
-Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This update is due to the current CA certificates not complying with one of the CA/Browser Forum Baseline requirements. This change will impact Azure AD hybrid agents installed on-premises that have hardened environments with a fixed list of root certificates and will need to be updated to trust the new certificate issuers.
-
-This change will result in disruption of service if you don't take action immediately. These agents include [Application Proxy connectors](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AppProxy) for remote access to on-premises, [Passthrough Authentication](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that allow your users to sign in to applications using the same passwords, and [Cloud Provisioning Preview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that perform AD to Azure AD sync.
-
-If you have an environment with firewall rules set to allow outbound calls to only specific Certificate Revocation List (CRL) download, you will need to allow the following CRL and OCSP URLs. For full details on the change and the CRL and OCSP URLs to enable access to, see [Azure TLS certificate changes](../../security/fundamentals/tls-certificate-changes.md).
---
-### Provisioning events will be removed from audit logs and published solely to provisioning logs
-
-**Type:** Plan for change
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-Activity by the SCIM [provisioning service](../app-provisioning/user-provisioning.md) is logged in both the audit logs and provisioning logs. This includes activity such as the creation of a user in ServiceNow, group in GSuite, or import of a role from AWS. In the future, these events will only be published in the provisioning logs. This change is being implemented to avoid duplicate events across logs, and additional costs incurred by customers consuming the logs in log analytics.
-
-We'll provide an update when a date is completed. This deprecation isn't planned for the calendar year 2020.
-
-> [!NOTE]
-> This does not impact any events in the audit logs outside of the synchronization events emitted by the provisioning service. Events such as the creation of an application, conditional access policy, a user in the directory, etc. will continue to be emitted in the audit logs. [Learn more](../reports-monitoring/concept-provisioning-logs.md?context=azure%2factive-directory%2fapp-provisioning%2fcontext%2fapp-provisioning-context).
-
---
-### Azure AD On-Premises Hybrid Agents Impacted by Azure Transport Layer Security (TLS) Certificate Changes
-
-**Type:** Plan for change
-**Service category:** N/A
-**Product capability:** Platform
-
-Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). There will be an update because of the current CA certificates not following one of the CA/Browser Forum Baseline requirements. This change will impact Azure AD hybrid agents installed on-premises that have hardened environments with a fixed list of root certificates. These agents will need to be updated to trust the new certificate issuers.
-
-This change will result in disruption of service if you don't take action immediately. These agents include:
-- [Application Proxy connectors](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AppProxy) for remote access to on-premises
-- [Passthrough Authentication](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that allow your users to sign in to applications using the same passwords
-- [Cloud Provisioning Preview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/AzureADConnect) agents that do AD to Azure AD sync.
-If you have an environment with firewall rules set to allow outbound calls to only specific Certificate Revocation List (CRL) download, you'll need to allow CRL and OCSP URLs. For full details on the change and the CRL and OCSP URLs to enable access to, see [Azure TLS certificate changes](../../security/fundamentals/tls-certificate-changes.md).
-
--
-### Azure Active Directory TLS 1.0, TLS 1.1, and 3DES Deprecation in US Gov Cloud
-
-**Type:** Plan for change
-**Service category:** All Azure AD applications
-**Product capability:** Standards
-
-Azure Active Directory will deprecate the following protocols starting March 31, 2021:
-- TLS 1.0
-- TLS 1.1
-- 3DES cipher suite (TLS_RSA_WITH_3DES_EDE_CBC_SHA)
-
-All client-server and browser-server combinations should use TLS 1.2 and modern cipher suites to maintain a secure connection to Azure Active Directory for Azure, Office 365, and Microsoft 365 services.
-
-Affected environments are:
-- Azure US Gov
-- [Office 365 GCC High & DoD](/microsoft-365/compliance/tls-1-2-in-office-365-gcc)
-
-For guidance on removing dependencies on the deprecated protocols, see [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
-
--
-### Assign applications to roles on administrative unit and object scope
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-This feature lets you assign an application (SPN) to an administrator role at the administrative unit scope. To learn more, see [Assign scoped roles to an administrative unit](../roles/admin-units-assign-roles.md).
---
-### Now you can disable and delete guest users when they're denied access to a resource
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Disable and delete is an advanced control in Azure AD Access Reviews to help organizations better manage external guests in Groups and Apps. If guests are denied in an access review, **disable and delete** will automatically block them from signing in for 30 days. After 30 days, they'll be removed from the tenant altogether.
-
-For more information about this feature, see [Disable and delete external identities with Azure AD Access Reviews](../governance/access-reviews-external-users.md#disable-and-delete-external-identities-with-azure-ad-access-reviews).
-
--
-### Access Review creators can add custom messages in emails to reviewers
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-In Azure AD access reviews, administrators creating reviews can now write a custom message to the reviewers. Reviewers will see the message in the email they receive that prompts them to complete the review. To learn more about using this feature, see step 14 of the [Create one or more access reviews](../governance/create-access-review.md#create-one-or-more-access-reviews) section.
---
-### New provisioning connectors in the Azure AD Application Gallery - October 2020
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Apple Business Manager](../saas-apps/apple-business-manager-provision-tutorial.md)
-- [Apple School Manager](../saas-apps/apple-school-manager-provision-tutorial.md)
-- [Code42](../saas-apps/code42-provisioning-tutorial.md)
-- [AlertMedia](../saas-apps/alertmedia-provisioning-tutorial.md)
-- [OpenText Directory Services](../saas-apps/open-text-directory-services-provisioning-tutorial.md)
-- [Cinode](../saas-apps/cinode-provisioning-tutorial.md)
-- [Global Relay Identity Sync](../saas-apps/global-relay-identity-sync-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
-
--
-### Integration assistant for Azure AD B2C
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-The Integration Assistant (preview) experience is now available for Azure AD B2C App registrations. This experience helps guide you in configuring your application for common scenarios. Learn more about [Microsoft identity platform best practices and recommendations](../develop/identity-platform-integration-checklist.md).
-
--
-### View role template ID in Azure portal UI
-
-**Type:** New feature
-**Service category:** Azure roles
-**Product capability:** Access Control
-
-
-You can now view the template ID of each Azure AD role in the Azure portal. In Azure AD, select the **Description** of the selected role.
-
-It's recommended that customers use role template IDs in their PowerShell scripts and code instead of the display name. Role template IDs are supported for use with [directoryRoles](/graph/api/resources/directoryrole) and [roleDefinition](/graph/api/resources/unifiedroledefinition?view=graph-rest-beta&preserve-view=true) objects. For more information on role template IDs, see [Azure AD built-in roles](../roles/permissions-reference.md).
---
-### API connectors for Azure AD B2C sign-up user flows is now in public preview
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-
-API connectors are now available for use with Azure Active Directory B2C. API connectors enable you to use web APIs to customize your sign-up user flows and integrate with external cloud systems. You can use API connectors to:
-
-- Integrate with custom approval workflows
-- Validate user input data
-- Overwrite user attributes
-- Run custom business logic
-
- Visit the [Use API connectors to customize and extend sign-up](../../active-directory-b2c/api-connectors-overview.md) documentation to learn more.
---
-### State property for connected organizations in entitlement management
-
-**Type:** New feature
-**Service category:** Directory Management
-**Product capability:** Entitlement Management
-
-
-All connected organizations will now have an additional property called "State". The state controls how the connected organization is used in policies that refer to "all configured connected organizations". The value is either "configured" (meaning the organization is in the scope of policies that use the "all" clause) or "proposed" (meaning the organization isn't in scope).
-
-Manually created connected organizations will have a default setting of "configured". Meanwhile, automatically created ones (created via policies that allow any user from the internet to request access) will default to "proposed." Any connected organizations created before September 9, 2020 will be set to "configured." Admins can update this property as needed. [Learn more](../governance/entitlement-management-organization.md#managing-a-connected-organization-programmatically).
-
---
-### Azure Active Directory External Identities now has premium advanced security settings for B2C
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-Risk-based Conditional Access and risk detection features of Identity Protection are now available in [Azure AD B2C](../..//active-directory-b2c/conditional-access-identity-protection-overview.md). With these advanced security features, customers can now:
-- Leverage intelligent insights to assess risk with B2C apps and end user accounts. Detections include atypical travel, anonymous IP addresses, malware-linked IP addresses, and Azure AD threat intelligence. Portal and API-based reports are also available.
-- Automatically address risks by configuring adaptive authentication policies for B2C users. App developers and administrators can mitigate real-time risk by requiring multi-factor authentication (MFA) or blocking access depending on the user risk level detected, with additional controls available based on location, group, and app.
-- Integrate with Azure AD B2C user flows and custom policies. Conditions can be triggered from built-in user flows in Azure AD B2C or can be incorporated into B2C custom policies. As with other aspects of the B2C user flow, end user experience messaging can be customized. Customization is according to the organization's voice, brand, and mitigation alternatives.
-
--
-### New Federated Apps available in Azure AD Application gallery - October 2020
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In October 2020, we added the following 27 new applications to our App gallery with Federation support:
-
-[Sentry](../saas-apps/sentry-tutorial.md), [Bumblebee - Productivity Superapp](https://app.yellowmessenger.com/user/login), [ABBYY FlexiCapture Cloud](../saas-apps/abbyy-flexicapture-cloud-tutorial.md), [EAComposer](../saas-apps/eacomposer-tutorial.md), [Genesys Cloud Integration for Azure](https://apps.mypurecloud.com/msteams-integration/), [Zone Technologies Portal](https://portail.zonetechnologie.com/signin), [Beautiful.ai](../saas-apps/beautiful.ai-tutorial.md), [Datawiza Access Broker](https://console.datawiza.com/), [ZOKRI](https://app.zokri.com/), [CheckProof](../saas-apps/checkproof-tutorial.md), [Ecochallenge.org](https://events.ecochallenge.org/users/login), [atSpoke](http://atspoke.com/login), [Appointment Reminder](https://app.appointmentreminder.co.nz/account/login), [Cloud.Market](https://cloud.market/), [TravelPerk](../saas-apps/travelperk-tutorial.md), [Greetly](https://app.greetly.com/), [OrgVitality SSO](../saas-apps/orgvitality-sso-tutorial.md), [Web Cargo Air](../saas-apps/web-cargo-air-tutorial.md), [Loop Flow CRM](../saas-apps/loop-flow-crm-tutorial.md), [Starmind](../saas-apps/starmind-tutorial.md), [Workstem](https://hrm.workstem.com/login), [Retail Zipline](../saas-apps/retail-zipline-tutorial.md), [Hoxhunt](../saas-apps/hoxhunt-tutorial.md), [MEVISIO](../saas-apps/mevisio-tutorial.md), [Samsara](../saas-apps/samsara-tutorial.md), [Nimbus](../saas-apps/nimbus-tutorial.md), [Pulse Secure virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)
-
-You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
-
-To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
---
-### Provisioning logs can now be streamed to log analytics
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-
-Publish your provisioning logs to log analytics in order to:
-- Store provisioning logs for more than 30 days
-- Define custom alerts and notifications
-- Build dashboards to visualize the logs
-- Execute complex queries to analyze the logs
-
-To learn how to use the feature, see [Understand how provisioning integrates with Azure Monitor logs](../app-provisioning/application-provisioning-log-analytics.md).
-
--
-### Provisioning logs can now be viewed by application owners
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-You can now allow application owners to monitor activity by the provisioning service and troubleshoot issues without providing them a privileged role or making IT a bottleneck. [Learn more](../reports-monitoring/concept-provisioning-logs.md).
-
--
-### Renaming 10 Azure Active Directory roles
-
-**Type:** Changed feature
-**Service category:** Azure roles
-**Product capability:** Access Control
-
-Some Azure Active Directory (AD) built-in roles have names that differ from those that appear in the Microsoft 365 admin center, the Azure AD portal, and Microsoft Graph. This inconsistency can cause problems in automated processes. With this update, we're renaming 10 roles to make their names consistent. The following table has the new role names:
-
-![Table showing role names in MS Graph API and the Azure portal, and the proposed new role name in M365 Admin Center, Azure portal, and API.](media/whats-new/azure-role.png)
---
-### Azure AD B2C support for auth code flow for SPAs using MSAL JS 2.x
-
-**Type:** Changed feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-MSAL.js version 2.x now includes support for the authorization code flow for single-page web apps (SPAs). Azure AD B2C will now support the use of the SPA app type on the Azure portal and the use of MSAL.js authorization code flow with PKCE for single-page apps. This will allow SPAs using Azure AD B2C to maintain SSO with newer browsers and abide by newer authentication protocol recommendations. Get started with the [Register a single-page application (SPA) in Azure Active Directory B2C](../../active-directory-b2c/tutorial-register-spa.md) tutorial.
---
-### Updates to Remember Multi-Factor Authentication (MFA) on a trusted device setting
-
-**Type:** Changed feature
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-
-We've recently updated the [remember Multi-Factor Authentication (MFA)](../authentication/howto-mfa-mfasettings.md#remember-multi-factor-authentication) on a trusted device feature to extend authentication for up to 365 days. Organizations with Azure Active Directory (Azure AD) Premium licenses can also use the [Conditional Access - Sign-in Frequency policy](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency), which provides more flexibility for reauthentication settings.
-
-For the optimal user experience, we recommend using Conditional Access sign-in frequency to extend session lifetimes on trusted devices, locations, or low-risk sessions as an alternative to the remember MFA on a trusted device setting. To get started, review our [latest guidance on optimizing the reauthentication experience](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
--
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-create.md
You can also create an access package using Microsoft Graph. A user in an appro
1. [List the accessPackageResources in the catalog](/graph/api/accesspackagecatalog-list?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog. 1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role when subsequently creating an accessPackageResourceRoleScope.
-1. [Create an accessPackage](/graph/tutorial-access-package-api&view=graph-rest-beta&preserve-view=true).
+1. [Create an accessPackage](/graph/tutorial-access-package-api).
1. [Create an accessPackageAssignmentPolicy](/graph/api/accesspackageassignmentpolicy-post?tabs=http&view=graph-rest-beta&preserve-view=true). 1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package.
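As a hedged sketch of the "Create an accessPackage" step, the request below posts to the Microsoft Graph beta endpoint through the Azure CLI; the catalog ID, display name, and description are placeholder assumptions, not values from this article.

```azurecli-interactive
# Hedged sketch: create an access package in an existing catalog (Graph beta).
# az rest acquires a Microsoft Graph token automatically for this URL.
az rest --method post \
  --url "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages" \
  --body '{
    "catalogId": "<catalog-id>",
    "displayName": "Sales resources",
    "description": "Access to sales apps and sites"
  }'
```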
active-directory Application Management Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-fundamentals.md
This article contains recommendations and best practices for managing applicatio
| Use multiple connectors | Use two or more Application Proxy connectors for greater resiliency, availability, and scale (see [Application Proxy connectors](../app-proxy/application-proxy-connectors.md)). Create connector groups and ensure each connector group has at least two connectors (three connectors is optimal). | | Locate connector servers close to application servers, and make sure they're in the same domain | To optimize performance, physically locate the connector server close to the application servers (see [Network topology considerations](../app-proxy/application-proxy-network-topology.md)). Also, the connector server and web applications servers should belong to the same Active Directory domain, or they should span trusting domains. This configuration is required for SSO with Integrated Windows Authentication (IWA) and Kerberos Constrained Delegation (KCD). If the servers are in different domains, you'll need to use resource-based delegation for SSO (see [KCD for single sign-on with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md)). | | Enable auto-updates for connectors | Enable auto-updates for your connectors for the latest features and bug fixes. Microsoft provides direct support for the latest connector version and one version before. (See [Application Proxy release version history](../app-proxy/application-proxy-release-version-history.md).) |
-| Bypass your on-premises proxy | For easier maintenance, configure the connector to bypass your on-premises proxy so it directly connects to the Azure services. (See [Application Proxy connectors and proxy servers](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md).) |
-| Use Azure AD Application Proxy over Web Application Proxy | Use Azure AD Application Proxy for most on-premises scenarios. Web Application Proxy is only preferred in scenarios that require a proxy server for AD FS and where you can't use custom domains in Azure Active Directory. (See [Application Proxy migration](../app-proxy/application-proxy-migration.md).) |
+| Bypass your on-premises proxy | For easier maintenance, configure the connector to bypass your on-premises proxy so it directly connects to the Azure services. (See [Application Proxy connectors and proxy servers](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md).) |
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-download-logs.md
++
+ Title: How to download logs in Azure Active Directory | Microsoft Docs
+description: Learn how to download activity logs in Azure Active Directory.
+
+documentationcenter: ''
++
+editor: ''
+++++ Last updated : 05/02/2021++++++
+# How to: Download logs in Azure Active Directory
+
+The Azure Active Directory (Azure AD) portal gives you access to three types of activity logs:
+
+- **[Sign-ins](concept-sign-ins.md)** – Information about sign-ins and how your resources are used by your users.
+- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as user and group management or updates applied to your tenant's resources.
+- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
+
+Azure AD stores the data in these logs for a limited amount of time. As an IT administrator, you can download your activity logs to have a long-term backup.
+
+This article explains how to download activity logs in Azure AD.
+
+## What you should know
+
+- In the Azure AD portal, you can find several entry points to the activity logs. For example, the **Activity** section on the [Users](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/MsGraphUsers) or [groups](https://portal.azure.com/#blade/Microsoft_AAD_IAM/GroupsManagementMenuBlade/AllGroups) page. However, there is only one location that provides you with an initially unfiltered view of the logs: the **Monitoring** section on the [Azure AD](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page.
+
+- Azure AD stores activity logs only for a specific period. For more information, see [How long does Azure AD store reporting data?](reference-reports-data-retention.md)
+
+- By downloading the logs, you can control how long the logs are stored.
+
+- You can download up to 250,000 records. If you want to download more data, use the reporting API (see the sketch after this list).
+
+- Your download is based on the filter you have set.
+
+- Azure AD supports the following formats for your download:
+
+ - **CSV**
+
+ - **JSON**
+
+- The timestamps in the downloaded files are always based on UTC.
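+
+If you hit the download limit, one way to retrieve more data is to page through the logs with the Microsoft Graph reporting API. The following is a minimal sketch, assuming you're signed in to the Azure CLI with an account that holds one of the roles listed later in this article; the `$top` value is an illustrative assumption.
+
+```azurecli-interactive
+# Hedged sketch: read the sign-ins log through Microsoft Graph and page
+# through the results. az rest acquires a Graph token automatically.
+az rest --method get \
+  --url 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=100'
+```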
+++
+## What license do you need?
+
+The option to download the data of an activity log is available in all editions of Azure AD.
++
+## Who can do it?
+
+To access the audit logs, you need to be in one of the following roles:
+
+- Global Reader
+- Report Reader
+- Global Administrator
+- Security Administrator
+- Security Reader
++
+## Steps
+
+In Azure AD, you can access the download option in the toolbar of an activity log page.
+
+![Download log](./media/howto-download-logs/download-log.png)
++
+**To download an activity log:**
+
+1. Navigate to the activity log view you care about:
+
+ - [The sign-ins log](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns)
+
+ - [The audit log](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Audit)
+
+ - [The provisioning log](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/ProvisioningEvents)
+
+
+2. **Add** the required filter.
+
+ ![Add filter](./media/howto-download-logs/add-filter.png)
+
+3. **Download** the data.
++
+## Next steps
+
+- [Sign-ins logs in Azure AD](concept-sign-ins.md)
+- [Audit logs in Azure AD](concept-audit-logs.md)
active-directory Simplenexus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/simplenexus-tutorial.md
To configure Azure AD single sign-on with SimpleNexus, perform the following ste
`https://simplenexus.com/<companyname>` > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [SimpleNexus Client support team](https://simplenexus.com/sn/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [SimpleNexus Client support team](https://www.simplenexus.com/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with SimpleNexus, perform the following ste
### Configure SimpleNexus Single Sign-On
-To configure single sign-on on **SimpleNexus** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [SimpleNexus support team](https://simplenexus.com/sn/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **SimpleNexus** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [SimpleNexus support team](https://www.simplenexus.com/contact-us/). They configure this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
When you click the SimpleNexus tile in the Access Panel, you should be automatic
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-driver.md
The Secrets Store CSI Driver for Kubernetes allows for the integration of Azure
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Before you start, install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows).
+- Before you start, install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows) and the *aks-preview* extension.
## Features
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
az provider register --namespace Microsoft.ContainerService ```
+## Install the aks-preview CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.10 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. If you already have the extension installed, update to the latest available version by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+ ## Create an AKS cluster with Secrets Store CSI Driver support > [!NOTE]
After learning how to use the CSI Secrets Store Driver with an AKS Cluster, see
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
[az-aks-create]: /cli/azure/aks#az_aks_create [key-vault-provider]: ../key-vault/general/key-vault-integrate-kubernetes.md [csi-storage-drivers]: ./csi-storage-drivers.md
api-management Api Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-faq.md
To learn how to configure an OAuth 2.0 authorization server with Active Director
API Management uses the [performance traffic routing method](../traffic-manager/traffic-manager-routing-methods.md#performance) in deployments to multiple geographic locations. Incoming traffic is routed to the closest API gateway. If one region goes offline, incoming traffic is automatically routed to the next closest gateway. Learn more about routing methods in [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md). ### Can I use an Azure Resource Manager template to create an API Management service instance?
-Yes. See the [Azure API Management Service](https://aka.ms/apimtemplate) quickstart templates.
+Yes. See the [Azure API Management Service](https://azure.microsoft.com/resources/templates/101-azure-api-management-create/) quickstart templates.
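As a hedged sketch, such a quickstart template can be deployed with the Azure CLI; the resource group name and parameter values below are placeholder assumptions.

```azurecli-interactive
# Hedged sketch: deploy the API Management quickstart template into an
# existing resource group. All names and parameter values are placeholders.
az deployment group create \
  --resource-group my-apim-rg \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-azure-api-management-create/azuredeploy.json" \
  --parameters publisherEmail=admin@contoso.com publisherName=Contoso
```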
### Can I use a self-signed TLS/SSL certificate for a back end? Yes. This can be done through PowerShell or by directly submitting to the API. This will disable certificate chain validation and will allow you to use self-signed or privately-signed certificates when communicating from API Management to the back end services.
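For the API route, a minimal sketch is to create the backend entity with chain and name validation disabled; the service name, backend ID, back-end URL, and api-version below are assumptions rather than values from this FAQ.

```azurecli-interactive
# Hedged sketch: define an API Management backend that skips TLS certificate
# chain and name validation, so a self-signed back-end certificate is accepted.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/backends/<backend-id>?api-version=2020-12-01" \
  --body '{
    "properties": {
      "url": "https://backend.contoso.com/api",
      "protocol": "http",
      "tls": { "validateCertificateChain": false, "validateCertificateName": false }
    }
  }'
```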
automation Automation Watchers Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-watchers-tutorial.md
Title: Track updated files with an Azure Automation watcher task
description: This article tells how to create a watcher task in the Azure Automation account to watch for new files created in a folder. -+ Last updated 12/17/2020
To complete this article, the following are required:
## Import a watcher runbook
-This article uses a watcher runbook called **Watch-NewFile** to look for new files in a directory. The watcher runbook retrieves the last known write time to the files in a folder and looks at any files newer than that watermark.
+This article uses a watcher runbook called **Watcher runbook that looks for new files in a directory** to look for new files in a directory. The watcher runbook retrieves the last known write time to the files in a folder and looks at any files newer than that watermark.
-You can download the runbook from the [Azure Automation GitHub organization](https://github.com/azureautomation).
+You can import this runbook into your Automation account from the portal using the following steps.
-1. Navigate to the Azure Automation GitHub organization page for [Watch-NewFile.ps1](https://github.com/azureautomation/watcher-action-that-processes-events-triggerd-by-a-watcher-runbook).
-2. To download the runbook from GitHub, select **Code** from the right-hand side of the page, and then select **Download ZIP** to download the whole code in a zip file.
-3. Extract the contents and [import the runbook](manage-runbooks.md#import-a-runbook-from-the-azure-portal).
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select the name of your Automation account from the list.
+1. In the left pane, select **Runbooks gallery** under **Process Automation**.
+1. Make sure **GitHub** is selected in the **Source** drop-down list.
+1. Search for **Watcher runbook**.
+1. Select **Watcher runbook that looks for new files in a directory**, and select **Import** on the details page.
+1. Give the runbook a name and optionally a description and click **OK** to import the runbook into your Automation account. You should see an **Import successful** message in a pane at the upper right of your window.
+1. The imported runbook appears in the list under the name you gave it when you select **Runbooks** from the left-hand pane.
+1. Click on the runbook, and on the runbook details page, select **Edit** and then click **Publish**. When prompted, click **Yes** to publish the runbook.
-You can also import this runbook into your Automation account from the portal using the following steps.
+You can also download the runbook from the [Azure Automation GitHub organization](https://github.com/azureautomation).
-1. Open your Automation account, and click on the Runbooks page.
-2. Click **Browse gallery** and under the **Source** drop-down list select **GitHub**.
-3. Search for **Watcher runbook**, select **Watcher runbook that looks for new files in a directory**, and click **Import**.
-4. Give the runbook a name and description and click **OK** to import the runbook into your Automation account.
-5. Select **Edit** and then click **Publish**. When prompted, click **Yes** to publish the runbook.
+1. Navigate to the Azure Automation GitHub organization page for [Watch-NewFile.ps1](https://github.com/azureautomation/watcher-runbook-that-looks-for-new-files-in-a-directory#watcher-runbook-that-looks-for-new-files-in-a-directory).
+1. To download the runbook from GitHub, select **Code** from the right-hand side of the page, and then select **Download ZIP** to download the whole code in a zip file.
+1. Extract the contents and [import the runbook](manage-runbooks.md#import-a-runbook-from-the-azure-portal).
## Create an Automation variable
An [Automation variable](./shared-resources/variables.md) is used to store the t
1. Select **Variables** under **Shared Resources** and click **+ Add a variable**. 1. Enter **Watch-NewFileTimestamp** for the name.
-1. Select DateTime for the type.
+1. Select **DateTime** for the type. It will default to the current date and time.
+
+ :::image type="content" source="./media/automation-watchers-tutorial/create-new-variable.png" alt-text="Create new variable blade.":::
+ 1. Click **Create** to create the Automation variable. ## Create an action runbook
-An action runbook is used in a watcher task to act on the data passed to it from a watcher runbook. You must import a predefined action runbook called **Process-NewFile** from the [Azure Automation GitHub organization](https://github.com/azureautomation).
+An action runbook is used in a watcher task to act on the data passed to it from a watcher runbook. You must import a predefined action runbook, either from the Azure portal or from the [Azure Automation GitHub organization](https://github.com/azureautomation).
-To create an action runbook:
+You can import this runbook into your Automation account from the Azure portal:
-1. Navigate to the Azure Automation GitHub organization page for [Process-NewFile.ps1](https://github.com/azureautomation/watcher-action-that-processes-events-triggerd-by-a-watcher-runbook).
-2. To download the runbook from GitHub, select **Code** from the right-hand side of the page, and then select **Download ZIP** to download the whole code in a zip file.
-3. Extract the contents and [import the runbook](manage-runbooks.md#import-a-runbook-from-the-azure-portal).
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select the name of your Automation account from the list.
+1. In the left pane, select **Runbooks gallery** under **Process Automation**.
+1. Make sure **GitHub** is selected in the **Source** drop-down list.
+1. Search for **Watcher action**, select **Watcher action that processes events triggered by a watcher runbook**, and click **Import**.
+1. Optionally, change the name of the runbook on the import page, and then click **OK** to import the runbook. You should see an **Import successful** message in the notification pane in the upper right-hand side of the browser.
+1. Go to your Automation Account page, and click on **Runbooks** on the left. Your new runbook should be listed under the name you gave it in the previous step. Click on the runbook, and on the runbook details page, select **Edit** and then click **Publish**. When prompted, click **Yes** to publish the runbook.
-You can also import this runbook into your Automation account from the Azure portal:
+To create an action runbook by downloading it from the [Azure Automation GitHub organization](https://github.com/azureautomation):
-1. Navigate to your Automation account and select **Runbooks** under **Process Automation**.
-1. Click **Browse gallery** and under the **Source** drop-down list select **GitHub**.
-1. Search for **Watcher action**, select **Watcher action that processes events triggered by a watcher runbook**, and click **Import**.
-1. Give the runbook a name and description and click **OK** to import the runbook into your Automation account.
-1. Select **Edit** and then click **Publish**. When prompted, click **Yes** to publish the runbook.
+1. Navigate to the Azure Automation GitHub organization page for [Process-NewFile.ps1](https://github.com/azureautomation/watcher-action-that-processes-events-triggerd-by-a-watcher-runbook).
+1. To download the runbook from GitHub, select **Code** from the right-hand side of the page, and then select **Download ZIP** to download the whole code in a zip file.
+1. Extract the contents and [import the runbook](manage-runbooks.md#import-a-runbook-from-the-azure-portal).
## Create a watcher task
In this step, you configure the watcher task referencing the watcher and action
1. Click **OK**, and then **Select** to return to the Watcher page. 1. Click **OK** to create the watcher task.
- ![Configure watcher action from UI](media/automation-watchers-tutorial/watchertaskcreation.png)
+ :::image type="content" source="./media/automation-watchers-tutorial/watchertaskcreation.png" alt-text="Configure watcher action from UI.":::
+ ## Trigger a watcher You must run a test as described below to ensure that the watcher task works as expected. 1. Remote into the Hybrid Runbook Worker.
-2. Open **PowerShell** and create a test file in the folder.
+1. Open **PowerShell** and create a test file in the folder.
```azurepowerShell-interactive New-Item -Name ExampleFile1.txt
Mode LastWriteTime Length Name
1. Click on **View watcher streams** under **Streams** to see that the watcher has found the new file and started the action runbook. 1. To see the action runbook jobs, click on **View watcher action jobs**. Each job can be selected to view the details of the job.
- ![Watcher action jobs from UI](media/automation-watchers-tutorial/WatcherActionJobs.png)
+ :::image type="content" source="./media/automation-watchers-tutorial/WatcherActionJobs.png" alt-text="Watcher action jobs from UI.":::
+ The expected output when the new file is found can be seen in the following example:
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
Write-Host "Press [ENTER] to continue..."
To learn about adding a feature flag and a Key Vault reference to an App Configuration store, see the following ARM template examples.
-- [101-app-configuration-store-ff](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-ff)
-- [101-app-configuration-store-keyvaultref](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-keyvaultref)
+- [101-app-configuration-store-ff](https://azure.microsoft.com/resources/templates/101-app-configuration-store-ff/)
+- [101-app-configuration-store-keyvaultref](https://azure.microsoft.com/resources/templates/101-app-configuration-store-keyvaultref/)
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/privacy-data-collection-and-reporting.md
+
+ Title: Data collection and reporting | Azure Arc enabled data services
+description: Explains the type of data that is transmitted by Arc enabled Data services to Microsoft.
++++++ Last updated : 04/27/2021+++
+# Azure Arc data services data collection and reporting
+
+This article describes the data that Azure Arc enabled data services transmits to Microsoft.
++
+## Related products
+
+Azure Arc enabled data services may use some or all of the following products:
+
+- SQL MI ΓÇô Azure Arc
+- PostgreSQL Hyperscale ΓÇô Azure Arc
+- Azure Data Studio
+- Azure CLI (az)
+- Azure Data CLI (`azdata`)
+
+## Directly connected
+
+When a cluster is configured to be directly connected to Azure, some data is automatically transmitted to Microsoft.
+
+The following table describes the type of data, how it is sent, and whether it is required.
+
+|Data category|What data is sent?|How is it sent?|Is it required?|
+|:-|:-|:-|:-|
+|Operational Data|Metrics and logs|Automatic, when configured to do so|No|
+|Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Automatic|Yes|
+|Diagnostics|Diagnostic information for troubleshooting purposes|Manually exported and provided to Microsoft Support|Only for the scope of troubleshooting and follows the standard [privacy policies](https://privacy.microsoft.com/privacystatement)|
+|Customer Experience Improvement Program (CEIP)|[CEIP summary](/sql-server/usage-and-diagnostic-data-configuration-for-sql-server)|Automatic, if allowed|No|
+
+## Indirectly connected
+
+When a cluster is not configured to be directly connected to Azure, it does not automatically transmit operational data or billing and inventory data to Microsoft. To transmit data to Microsoft, you need to configure the export.
+
+The following table describes the type of data, how it is sent, and whether it is required.
+
+|Data category|What data is sent?|How is it sent?|Is it required?|
+|:-|:-|:-|:-|
+|Operational Data|Metrics and logs|Manual|No|
+|Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Manual|Yes|
+|Diagnostics|Diagnostic information for troubleshooting purposes|Manually exported and provided to Microsoft Support|Only for the scope of troubleshooting and follows the standard [privacy policies](https://privacy.microsoft.com/privacystatement)|
+|Customer Experience Improvement Program (CEIP)|[CEIP summary](/sql-server/usage-and-diagnostic-data-configuration-for-sql-server)|Automatic, if allowed|No|
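+
+For the rows marked Manual above, the flow is typically an export to a file followed by an upload. A minimal sketch with the Azure Data CLI (`azdata`) follows; the file path is a placeholder, and the exact flags may vary by release.
+
+```console
+# Hedged sketch: export usage data to a local file, then upload it to Azure.
+azdata arc dc export --type usage --path usage.json
+azdata arc dc upload --path usage.json
+```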
+
+## Detailed description of data
+
+This section provides more details about the information that Azure Arc enabled data services transmits to Microsoft.
+
+### Operational data
+
+Operational data is collected for all database instances and for the Arc enabled data services platform itself. There are two types of operational data:
+
+- Metrics – Performance and capacity related metrics, which are collected to an Influx DB provided as part of Arc enabled data services. You can view these metrics in the provided Grafana dashboard.
+
+- Logs – Logs emitted by all components, including failure, warning, and informational events, are collected to an Elasticsearch database provided as part of Arc enabled data services. You can view the logs in the provided Kibana dashboard.
+
+The operational data stored locally requires built-in administrative privileges to view it in Grafana/Kibana.
+
+The operational data does not leave your environment unless you choose to export/upload (indirectly connected mode) or automatically send (directly connected mode) the data to Azure Monitor/Log Analytics. The data goes into a Log Analytics workspace, which you control.
+
+If the data is sent to Azure Monitor or Log Analytics, you can choose which Azure region or datacenter the Log Analytics workspace resides in. After that, access to view or copy it from other locations can be controlled by you.
+
+### Billing and inventory data
+
+Billing data is used to track usage that is billable. This data is essential for running the service and needs to be transmitted manually or automatically in all modes.
+
+Every database instance and the data controller itself will be reflected in Azure as an Azure resource in Azure Resource Manager.
+
+There are four resource types:
+
+- Arc enabled SQL Managed Instance
+- Arc enabled PostgreSQL Hyperscale server group
+- Arc enabled SQL Server
+- Data controller
+
+The following sections show the properties, types, and descriptions that are collected and stored about each type of resource:
+
+### Arc enabled SQL Server
+- SQL Server edition.
+ - `string: Edition`
+- Resource ID of the container resource (Azure Arc for Servers).
+ - `string: ContainerResourceId`
+- Time when the resource was created.
+ - `string: CreateTime`
+- The number of logical processors used by the SQL Server instance.
+ - `string: VCore`
+- Cloud connectivity status.
+ - `string: Status`
+- SQL Server update level.
+ - `string: PatchLevel`
+- SQL Server collation.
+ - `string: Collation`
+- SQL Server current version.
+ - `string: CurrentVersion`
+- SQL Server instance name.
+ - `string: InstanceName`
+- Dynamic TCP ports used by SQL Server.
+ - `string: TcpDynamicPorts`
+- Static TCP ports used by SQL Server.
+ - `string: TcpStaticPorts`
+- SQL Server product ID.
+ - `string: ProductId`
+- SQL Server provisioning state.
+ - `string: ProvisioningState`
+
+### Data controller
+
+- Location information
+ - `public OnPremiseProperty OnPremiseProperty`
+- The raw Kubernetes information (`kubectl get datacontroller`)
+ - `object: K8sRaw`
+- Last uploaded date from on-premises cluster.
+ - `System.DateTime: LastUploadedDate`
+- Data controller state
+ - `string: ProvisioningState`
+
+### PostgreSQL Hyperscale Server Group
+
+- The data controller ID
+ - `string: DataControllerId`
+- The instance admin name
+ - `string: Admin`
+- Username and password for basic authentication
+ - `public: BasicLoginInformation BasicLoginInformation`
+- The raw Kubernetes information (`kubectl get postgres12`)
+ - `object: K8sRaw`
+- Last uploaded date from on-premises cluster.
+ - `System.DateTime: LastUploadedDate`
+- Group provisioning state
+ - `string: ProvisioningState`
+
+### SQL Managed Instance
+
+- The managed instance ID
+ - `public string: DataControllerId`
+- The instance admin username
+ - `string: Admin`
+- The instance start time
+ - `string: StartTime`
+- The instance end time
+ - `string: EndTime`
+- The raw Kubernetes information (`kubectl get sqlmi`)
+ - `object: K8sRaw`
+- Username and password for basic authentication.
+ - `public: BasicLoginInformation BasicLoginInformation`
+- Last uploaded date from on-premises cluster.
+ - `public: System.DateTime LastUploadedDate`
+- SQL managed instance provisioning state
+ - `public string: ProvisioningState`
+
+### Examples
+
+An example of the resource inventory data JSON document that is sent to Azure to create Azure resources in your subscription:
+
+```json
+{
+  "customObjectName": "<resource type>-2020-29-5-23-13-17-164711",
+  "uid": "4bc3dc6b-9148-4c7a-b7dc-01afc1ef5373",
+  "instanceName": "sqlInstance001",
+  "instanceNamespace": "arc",
+  "instanceType": "<resource>",
+  "location": "eastus",
+  "resourceGroupName": "production-resources",
+  "subscriptionId": "<subscription_id>",
+  "isDeleted": false,
+  "externalEndpoint": "32.191.39.83:1433",
+  "vCores": "2",
+  "createTimestamp": "05/29/2020 23:13:17",
+  "updateTimestamp": "05/29/2020 23:13:17"
+}
+```
+
+
+
+Billing data captures the start time ("created") and end time ("deleted") of a given instance, as well as the start and end times whenever the number of cores available to a given instance ("core limit") changes.
+
+```json
+{
+  "requestType": "usageUpload",
+  "clusterId": "4b0917dd-e003-480e-ae74-1a8bb5e36b5d",
+  "name": "DataControllerTestName",
+  "subscriptionId": "<subscription_id>",
+  "resourceGroup": "production-resources",
+  "location": "eastus",
+  "uploadRequest": {
+    "exportType": "usages",
+    "dataTimestamp": "2020-06-17T22:32:24Z",
+    "data": "[{\"name\":\"sqlInstance001\",\"namespace\":\"arc\",\"type\":\"<resource type>\",\"eventSequence\":1,\"eventId\":\"50DF90E8-FC2C-4BBF-B245-CB20DC97FF24\",\"startTime\":\"2020-06-17T19:11:47.7533333\",\"endTime\":\"2020-06-17T19:59:00\",\"quantity\":1,\"id\":\"<subscription_id>\"}]",
+    "signature": "MIIE7gYJKoZIhvcNAQ...2xXqkK"
+  }
+}
+```
+
+### Diagnostic data
+
+In support situations, you may be asked to provide database instance logs, Kubernetes logs, and other diagnostic logs. The support team will provide a secure location for you to upload to. Dynamic management views (DMVs) may also provide diagnostic data. The DMVs or queries used could contain database schema metadata details but typically not customer data. Diagnostic data does not contain any passwords, cluster IPs, or individually identifiable data. These are cleaned and the logs are made anonymous for storage when possible. They are not transmitted automatically; an administrator has to manually upload them.
+
+|Field name |Notes |
+|||
+|Error logs |Log files capturing errors may contain customer or personal data (see below); they are restricted and shared by the user |
+|DMVs |Dynamic management views can contain query and query plans but are restricted and shared by the user |
+|Views |Views can contain customer data but are restricted and shared only by the user |
+|Crash dumps – customer data |Maximum 30-day retention of crash dumps – may contain access control data <br/><br/> Statistics objects, data values within rows, query texts could be in customer crash dumps |
+|Crash dumps – personal data |Machine, logins/user names, emails, location information, customer identification – require user consent to be included |
+
+### Customer experience improvement program (CEIP) (Telemetry)
+
+Telemetry is used to track product usage metrics and environment information.
+See [SQL Server privacy supplement](/sql/sql-server/sql-server-privacy/).
+
+## Next steps
+[Upload usage data to Azure Monitor](upload-usage-data.md)
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
The query results are transformed into a number that is compared against the thr
### Frequency > [!NOTE]
-> There are currently no additional charges for 1-minute frequency log alerts. Pricing for features that are in preview will be announced in the future and a notice provided prior to start of billing. Should you choose to continue using 1-minute frequency log alerts after the notice period, you will be billed at the applicable rate.
+> There are currently no additional charges for the 1-minute frequency log alerts preview. Pricing for features that are in preview will be announced in the future and a notice provided prior to start of billing. Should you choose to continue using 1-minute frequency log alerts after the notice period, you will be billed at the applicable rate.
-The interval in which the query is run. Can be set from 1 minute to one day. Must be equal to or less than the [query time range](#query-time-range) to not miss log records.
+The interval in which the query is run. Can be set from a minute to a day. Must be equal to or less than the [query time range](#query-time-range) to not miss log records.
For example, suppose you set the time period to 30 minutes and the frequency to 1 hour. If the query runs at 00:00, it returns records between 23:30 and 00:00. The next run, at 01:00, returns records between 00:30 and 01:00. Any records created between 00:00 and 00:30 are never evaluated.
-To use 1-minute frequency alerts you need to set a property via the API. When creating new or updating existing log alert rules in API Version `2020-05-01-preview` - in `properties` section, add `evaluationFrequency` with value `PT1M` of type `String`. When creating new or updating existing log alert rules in API Version `2018-04-16` - in `schedule` section, add `frequencyInMinutes` with value `1` of type `Int`.
- ### Number of violations to trigger alert You can specify the alert evaluation period and the number of failures needed to trigger an alert, which lets you better define an impact time for triggering the alert.
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-charts.md
To view multiple metrics on the same chart, first [create a new chart](./metrics
> [!NOTE] > Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. >
-> In these cases, consider using multiple charts instead. In the metrics explorer, select **Add chart** to create a new chart.
+> In these cases, consider using multiple charts instead. In the metrics explorer, select **New chart** to create a new chart.
+
+![Screenshot showing multiple metrics.](./media/metrics-charts/multiple-metrics-chart.png)
### Multiple charts
-To create another chart that uses a different metric, select **Add chart**.
+To create another chart that uses a different metric, select **New chart**.
To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then choose **Move up**, **Move down**, or **Delete**.
+![Screenshot showing multiple charts.](./media/metrics-charts/multiple-charts.png)
+ ## Time range controls In addition to changing the time range using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can also pan and zoom using the controls in the chart area.
If you don't see any data on your chart, review the following troubleshooting in
## Next steps
-To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../app/tutorial-app-dashboards.md).
+To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../app/tutorial-app-dashboards.md).
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
For important additional information, see [Monitoring Agents Overview](../agents
|qpu_metric|Yes|QPU|Count|Average|QPU. Range 0-100 for S1, 0-200 for S2 and 0-400 for S4|ServerResourceType| |QueryPoolBusyThreads|Yes|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|ServerResourceType| |QueryPoolIdleThreads|Yes|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|ServerResourceType|
-|QueryPoolJobQueueLength|Yes|Threads: Query pool job queue lengt|Count|Average|Number of jobs in the queue of the query thread pool.|ServerResourceType|
+|QueryPoolJobQueueLength|Yes|Threads: Query pool job queue length|Count|Average|Number of jobs in the queue of the query thread pool.|ServerResourceType|
|Quota|Yes|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|ServerResourceType| |QuotaBlocked|Yes|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|ServerResourceType| |RowsConvertedPerSec|Yes|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|ServerResourceType|
For important additional information, see [Monitoring Agents Overview](../agents
|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, OperationResult| |WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|Replica| -
-## Microsoft.ServiceFabricMesh/applications
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ActualCpu|No|ActualCpu|Count|Average|Actual CPU usage in milli cores|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
-|ActualMemory|No|ActualMemory|Bytes|Average|Actual memory usage in MB|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
-|AllocatedCpu|No|AllocatedCpu|Count|Average|Cpu allocated to this container in milli cores|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
-|AllocatedMemory|No|AllocatedMemory|Bytes|Average|Memory allocated to this container in MB|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
-|ApplicationStatus|No|ApplicationStatus|Count|Average|Status of Service Fabric Mesh application|ApplicationName, Status|
-|ContainerStatus|No|ContainerStatus|Count|Average|Status of the container in Service Fabric Mesh application|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName, Status|
-|CpuUtilization|No|CpuUtilization|Percent|Average|Utilization of CPU for this container as percentage of AllocatedCpu|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
-|MemoryUtilization|No|MemoryUtilization|Percent|Average|Utilization of CPU for this container as percentage of AllocatedCpu|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
-|RestartCount|No|RestartCount|Count|Average|Restart count of a container in Service Fabric Mesh application|ApplicationName, Status, ServiceName, ServiceReplicaName, CodePackageName|
-|ServiceReplicaStatus|No|ServiceReplicaStatus|Count|Average|Health Status of a service replica in Service Fabric Mesh application|ApplicationName, Status, ServiceName, ServiceReplicaName|
-|ServiceStatus|No|ServiceStatus|Count|Average|Health Status of a service in Service Fabric Mesh application|ApplicationName, Status, ServiceName|
-- ## Microsoft.SignalRService/SignalR |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-troubleshoot.md
Click the **Status** to drill in to see logs and further details, which may help
## Not collecting state The monitoring machine has a state of *Not collecting* if there's no data in *InsightsMetrics* for SQL in the last 10 minutes.
+> [!NOTE]
+> Please verify that you are trying to collect data from a [supported version of SQL](sql-insights-overview.md#supported-versions). For example, attempting to collect data with a valid profile and connection string but from an unsupported version of Azure SQL Database will result in a not collecting state.
+ SQL insights uses the following query to retrieve this information: ```kusto
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data from a Log Analyti
- Supported tables are currently limited to those specified in the [supported tables](#supported-tables) section below. For example, custom log tables aren't supported currently.
- If the data export rule includes an unsupported table, the operation will succeed, but no data will be exported for that table until the table is supported.
- If the data export rule includes a table that doesn't exist, it will fail with error ```Table <tableName> does not exist in the workspace```.
-- Your Log Analytics workspace can be in any region except for the following:
- - Azure Government regions
- - Japan West
- - Brazil south east
- - Norway East
- - UAE North
-- You can have up to 10 enabled rules in your workspace. Additional rules above 10 can be created in disable state.
+- Data export will be available in all regions, but it's currently not available in the following: Azure Government regions, Japan West, Brazil South East, Norway East, Norway West, UAE North, UAE Central, Australia Central 2, Switzerland North, Switzerland West, Germany West Central, South India, and France South.
+- You can define up to 10 enabled rules in your workspace. Additional rules are allowed, but in a disabled state.
- Destination must be unique across all export rules in your workspace. - The destination storage account or event hub must be in the same region as the Log Analytics workspace. - Names of tables to be exported can be no longer than 60 characters for a storage account and no more than 47 characters for an event hub. Tables with longer names will not be exported.
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers that are marked with **- registered** are registered by
| Microsoft.SerialConsole - [registered](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) | | Microsoft.ServiceBus | [Service Bus](/azure/service-bus/) | | Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) |
-| Microsoft.ServiceFabricMesh | [Service Fabric Mesh](../../service-fabric-mesh/index.yml) |
| Microsoft.Services | core | | Microsoft.SignalRService | [Azure SignalR Service](../../azure-signalr/index.yml) | | Microsoft.SoftwarePlan | License |
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
The following limitations apply to tags:
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]
- > Currently, Azure DNS zones and Traffic Manager services also don't allow the use of spaces in the tag.
+ > * Azure DNS zones and Traffic Manager don't support the use of spaces in the tag or a tag that starts with a number.
>
- > Azure Front Door doesn't support the use of `#` in the tag name.
+ > * Azure Front Door doesn't support the use of `#` in the tag name.
>
- > Azure Automation and Azure CDN only support 15 tags on resources.
+ > * Azure Automation and Azure CDN only support 15 tags on resources.
## Next steps
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
Title: "Tutorial: Build a Blazor Server chat app - Azure SignalR"
-description: In this tutorial, you learn how to build and modify a Blazor Server app with Azure SignalR Service
+ Title: 'Tutorial: Build a Blazor Server chat app - Azure SignalR'
+description: In this tutorial, you learn how to build and modify a Blazor Server app with Azure SignalR Service.
# Tutorial: Build a Blazor Server chat app

This tutorial shows you how to build and modify a Blazor Server app. You'll learn how to:
-> [!div class="checklist"]
-> * Build a simple chat room with Blazor Server app.
-> * Modify Razor components.
-> * Use event handling and data binding in components.
-> * Quick deploy to Azure App Service in Visual Studio.
-> * Migrate local SignalR to Azure SignalR Service.
+> [!div class="checklist"]
+> * Build a simple chat room with the Blazor Server app template.
+> * Work with Razor components.
+> * Use event handling and data binding in Razor components.
+> * Quick-deploy to Azure App Service in Visual Studio.
+> * Migrate from local SignalR to Azure SignalR Service.
## Prerequisites

* Install the [.NET Core 3.0 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.0) (Version >= 3.0.100)
* Install [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) (Version >= 16.3)
-> Visual Studio 2019 Preview version also works which is releasing with latest Blazor Server app template targeting newer .Net Core version.
+ [Having issues? Let us know.](https://aka.ms/asrs/qsblazor)

## Build a local chat room in Blazor Server app
-From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web app publish process, and manage dependencies between web app and SignalR service would be much more convenient. You can experience working on local SignalR in dev local environment and working on Azure SignalR Service for Azure App Service at the same time without any code changes.
+Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built into the web application publish process to make managing the dependencies between the web app and SignalR service much more convenient. You can work with a local SignalR instance in a local development environment and with Azure SignalR Service for Azure App Service at the same time without any code changes.
-1. Create a chat Blazor app
+1. Create a Blazor chat app:
+ 1. In Visual Studio, choose **Create a new project**.
+ 1. Select **Blazor App**.
+ 1. Name the application and choose a folder.
+ 1. Select the **Blazor Server App** template.
+
+ > [!NOTE]
+ > Make sure that you've already installed .NET Core SDK 3.0+ to enable Visual Studio to correctly recognize the target framework.
- In Visual Studio, choose Create a new project -> Blazor App -> (name the app and choose a folder) -> Blazor Server App. Make sure you've already installed .NET Core SDK 3.0+ to enable Visual Studio correctly recognize the target framework.
-
- [ ![In Create a new project, the Blazor App templates are selected.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-create.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-create.png#lightbox)
+ [ ![In Create a new project, select the Blazor app template.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-create.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-create.png#lightbox)
- Or run cmd
- ```dotnetcli
- dotnet new blazorserver -o BlazorChat
- ```
+ 5. You also can create a project by running the [`dotnet new`](/dotnet/core/tools/dotnet-new) command in the .NET CLI:
+
+ ```dotnetcli
+ dotnet new blazorserver -o BlazorChat
+ ```
-1. Add a `BlazorChatSampleHub.cs` file to implement `Hub` for chat.
+1. Add a new C# file called `BlazorChatSampleHub.cs` and create a new class `BlazorChatSampleHub` deriving from the `Hub` class for the chat app. For more information on creating hubs, see [Create and Use Hubs](/aspnet/core/signalr/hubs#create-and-use-hubs).
```cs using System;
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
} ```
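A minimal hub for this chat app might look like the following sketch; the `Broadcast` method name and `/chat` route are assumptions for illustration:

```cs
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

namespace BlazorChat
{
    // A minimal chat hub: clients call Broadcast, and the hub
    // relays the message to every connected client.
    public class BlazorChatSampleHub : Hub
    {
        public const string HubUrl = "/chat";

        public async Task Broadcast(string username, string message)
        {
            await Clients.All.SendAsync("Broadcast", username, message);
        }
    }
}
```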
-1. Add an endpoint for the hub in `Startup.Configure()`.
+1. Add an endpoint for the hub in the `Startup.Configure()` method.
```cs app.UseEndpoints(endpoints =>
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
}); ```
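Assuming the hub sketch above, the completed endpoint registration might look like this:

```cs
app.UseEndpoints(endpoints =>
{
    endpoints.MapBlazorHub();
    endpoints.MapFallbackToPage("/_Host");

    // Expose the chat hub on the route defined by the hub class.
    endpoints.MapHub<BlazorChatSampleHub>(BlazorChatSampleHub.HubUrl);
});
```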
-1. Install `Microsoft.AspNetCore.SignalR.Client` package to use SignalR client.
+1. Install the `Microsoft.AspNetCore.SignalR.Client` package to use the SignalR client.
```dotnetcli dotnet add package Microsoft.AspNetCore.SignalR.Client --version 3.1.7 ```
-1. Create `ChatRoom.razor` under `Pages` folder to implement SignalR client. Follow steps below or simply copy the [ChatRoom.razor](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BlazorChat/Pages/ChatRoom.razor).
+1. Create a new [Razor component](/aspnet/core/blazor/components/) called `ChatRoom.razor` under the `Pages` folder to implement the SignalR client. Follow the steps below or use the [ChatRoom.razor](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BlazorChat/Pages/ChatRoom.razor) file.
- 1. Add page link and reference.
+ 1. Add the [`@page`](/aspnet/core/mvc/views/razor#page) directive and the using statements. Use the [`@inject`](/aspnet/core/mvc/views/razor#inject) directive to inject the [`NavigationManager`](/aspnet/core/blazor/fundamentals/routing#uri-and-navigation-state-helpers) service.
```razor @page "/chatroom"
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
@using Microsoft.AspNetCore.SignalR.Client; ```
- 1. Add code to new SignalR client to send and receive messages.
+ 1. In the `@code` section, add the following members to the new SignalR client to send and receive messages.
```razor @code {
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
} ```
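As a sketch of those members (again assuming the `/chat` route and `Broadcast` method from the hub above), the client builds a `HubConnection`, registers a message handler, and starts the connection:

```cs
// Inside the @code block of ChatRoom.razor.
private HubConnection _hubConnection;
private List<string> _messages = new List<string>();

private async Task ConnectAsync()
{
    // Build the connection against the hub route ("/chat" is assumed).
    _hubConnection = new HubConnectionBuilder()
        .WithUrl(NavigationManager.ToAbsoluteUri("/chat"))
        .Build();

    // Append each broadcast message and re-render the component.
    _hubConnection.On<string, string>("Broadcast", (user, message) =>
    {
        _messages.Add($"{user}: {message}");
        InvokeAsync(StateHasChanged);
    });

    await _hubConnection.StartAsync();
}

private async Task SendAsync(string user, string message) =>
    await _hubConnection.SendAsync("Broadcast", user, message);
```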
- 1. Add rendering part before `@code` for UI to interact with SignalR client.
+ 1. Add the UI markup before the `@code` section to interact with the SignalR client.
```razor <h1>Blazor SignalR Chat Sample</h1>
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
} ```
-1. Update `NavMenu.razor` to insert a entry menu for the chat room under `NavMenuCssClass` like rest.
+1. Update the `NavMenu.razor` component to insert a new `NavLink` component to link to the chat room under `NavMenuCssClass`.
```razor <li class="nav-item px-3">
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
</li> ```
-1. Update `site.css` to optimize for chat area bubble views. Append below code in the end.
+1. Add a few CSS classes to the `site.css` file to style the UI elements in the chat page.
```css /* improved for chat text box */
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
} ```
-1. Click <kbd>F5</kbd> to run the app. You'll be able to chat like below.
+1. Press <kbd>F5</kbd> to run the app. Now, you can initiate the chat:
[ ![An animated chat between Bob and Alice is shown. Alice says Hello, Bob says Hi.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat.gif) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat.gif#lightbox)
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
## Publish to Azure
- So far, the Blazor App is working on local SignalR and when deploy to Azure App Service, it's suggested to use [Azure SignalR Service](/aspnet/core/signalr/scale#azure-signalr-service) which allows for scaling up a Blazor Server app to a large number of concurrent SignalR connections. In addition, the SignalR service's global reach and high-performance data centers significantly aid in reducing latency due to geography.
+When you deploy the Blazor app to Azure App Service, we recommend that you use [Azure SignalR Service](/aspnet/core/signalr/scale#azure-signalr-service). Azure SignalR Service allows for scaling up a Blazor Server app to a large number of concurrent SignalR connections. In addition, the SignalR service's global reach and high-performance datacenters significantly aid in reducing latency due to geography.
> [!IMPORTANT]
-> In Blazor Server app, UI states are maintained at server side which means server sticky is required in this case. If there's single app server, server sticky is ensured by design. However, if there're multiple app servers, there's a chance that client negotiation and connection may go to different servers and leads to UI errors in Blazor app. So you need to enable server sticky like below in `appsettings.json`:
+> In a Blazor Server app, UI state is maintained on the server side, which means a sticky server session is required to preserve state. If there is a single app server, sticky sessions are ensured by design. However, if there are multiple app servers, the client negotiation and connection may go to different servers, which may lead to inconsistent UI state in the Blazor app. Hence, it's recommended to enable sticky server sessions as shown below in *appsettings.json*:
+>
> ```json > "Azure:SignalR:ServerStickyMode": "Required" > ```
-1. Right click the project and navigate to `Publish`.
+1. Right-click the project and go to **Publish**. Use the following settings:
+ * **Target**: Azure
+ * **Specific target**: All types of **Azure App Service** are supported.
+ * **App Service**: Create or select the App Service instance.
- * Target: Azure
- * Specific target: All types of **Azure App Service** are supported.
- * App Service: create a new one or select existing app service.
+ [ ![The animation shows selection of Azure as target, and then Azure App Service as specific target.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-profile.gif) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-profile.gif#lightbox)
- [ ![The animation shows selection of Azure as Target, and then Azure App Serice as Specific target.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-profile.gif) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-profile.gif#lightbox)
+1. Add the Azure SignalR Service dependency.
-1. Add Azure SignalR Service dependency
-
- After publish profile created, you can see a recommended message under **Service Dependencies**. Click **Configure** to create new or select existing Azure SignalR Service in the panel.
+ After the creation of the publish profile, you can see a recommendation message to add Azure SignalR Service under **Service Dependencies**. Select **Configure** to create a new Azure SignalR Service instance or select an existing one in the pane.
[ ![On Publish, the link to Configure is highlighted.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency.png#lightbox)
- The service dependency will do things below to enable your app automatically switch to Azure SignalR Service when on Azure.
+ The service dependency will carry out the following activities to enable your app to automatically switch to Azure SignalR Service when on Azure:
* Update [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration) to use Azure SignalR Service.
- * Add Azure SignalR Service NuGet package reference.
- * Update profile properties to save the dependency settings.
- * Configure secrets store depends on your choice.
- * Add `appsettings` configuration to make your app target selected Azure SignalR Service.
+ * Add the Azure SignalR Service NuGet package reference.
+ * Update the profile properties to save the dependency settings.
+ * Configure the secrets store as per your choice.
+ * Add the configuration in *appsettings.json* to make your app target Azure SignalR Service.
- [ ![On Summary of changes, the check boxes are used to select all dependencies.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency-summary.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency-summary.png#lightbox)
+ [ ![On Summary of changes, the checkboxes are used to select all dependencies.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency-summary.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency-summary.png#lightbox)
-1. Publish the app
+1. Publish the app.
- Now it's ready to publish. And it'll auto browser the page after publishing completes.
+ Now the app is ready to be published. Upon the completion of the publishing process, the app automatically launches in a browser.
+
> [!NOTE]
- > It may not immediately work in the first time visiting page due to Azure App Service deployment start up latency and try refresh the page to give some delay.
- > Besides, you can use browser debugger mode with <kbd>F12</kbd> to validate the traffic has already redirect to Azure SignalR Service.
+ > The app may require some time to start due to the Azure App Service deployment start latency. You can use the browser debugger tools (usually by pressing <kbd>F12</kbd>) to ensure that the traffic has been redirected to Azure SignalR Service.
[ ![Blazor SignalR Chat Sample has a text box for your name, and a Chat! button to start a chat.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-azure.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-azure.png#lightbox) [Having issues? Let us know.](https://aka.ms/asrs/qsblazor)
-## Further topic: Enable Azure SignalR Service in local development
+## Enable Azure SignalR Service for local development
-1. Add reference to Azure SignalR SDK
+1. Add a reference to the Azure SignalR SDK using the following command.
```dotnetcli dotnet add package Microsoft.Azure.SignalR --version 1.5.1 ```
-1. Add a call to Azure SignalR Service in `Startup.ConfigureServices()`.
+1. Add a call to `AddAzureSignalR()` in `Startup.ConfigureServices()` as demonstrated below.
```cs public void ConfigureServices(IServiceCollection services)
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
} ```
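A sketch of the resulting method, assuming the default Blazor Server registrations, might look like:

```cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
    services.AddServerSideBlazor();

    // AddAzureSignalR() switches the SignalR backplane to the service;
    // it reads the connection string from Azure:SignalR:ConnectionString.
    services.AddSignalR().AddAzureSignalR();
}
```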
-1. Configure Azure SignalR Service `ConnectionString` either in `appsetting.json` or with [Secret Manager](/aspnet/core/security/app-secrets?tabs=visual-studio#secret-manager) tool
+1. Configure the Azure SignalR Service connection string either in *appsettings.json* or by using the [Secret Manager](/aspnet/core/security/app-secrets?tabs=visual-studio#secret-manager) tool.
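For example, with the Secret Manager tool the connection string stays out of the project tree; `Azure:SignalR:ConnectionString` is the configuration key the SDK reads by default:

```dotnetcli
dotnet user-secrets init
dotnet user-secrets set Azure:SignalR:ConnectionString "<your-connection-string>"
```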
> [!NOTE]
-> Step 2 can be replaced by using [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration) to SignalR SDK.
->
-> 1. Add configuration to turn on Azure SignalR Service in `appsetting.json`
-> ```js
-> "Azure": {
-> "SignalR": {
-> "Enabled": true,
-> "ConnectionString": <your-connection-string>
-> }
-> }
+> Step 2 can be replaced with configuring [Hosting Startup Assemblies](/aspnet/core/fundamentals/host/platform-specific-configuration) to use the SignalR SDK.
+>
+> 1. Add the configuration to turn on Azure SignalR Service in *appsettings.json*:
+>
+> ```json
+> "Azure": {
+> "SignalR": {
+> "Enabled": true,
+> "ConnectionString": <your-connection-string>
+> }
+> }
+>
> ```
->
-> 1. Assign hosting startup assembly to use Azure SignalR SDK. Edit `launchSettings.json` and add a configuration like below inside `environmentVariables`.
-> ```js
+>
+> 1. Configure the hosting startup assembly to use the Azure SignalR SDK. Edit *launchSettings.json* and add a configuration like the following example inside `environmentVariables`:
+>
+> ```json
> "environmentVariables": {
-> ...,
+> ...,
> "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "Microsoft.Azure.SignalR" > }
-> ```
+>
+> ```
+>
[Having issues? Let us know.](https://aka.ms/asrs/qsblazor)
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
To clean up the resources created in this tutorial, delete the resource group using the Azure portal.
+## Additional resources
+
+* [ASP.NET Core Blazor](/aspnet/core/blazor)
+ ## Next steps
-In this tutorial, you'll learn how to:
+In this tutorial, you learned how to:
> [!div class="checklist"]
-> * Build a simple chat room with Blazor Server app.
-> * Modify Razor components.
-> * Use event handling and data binding in components.
-> * Quick deploy to Azure App Service in Visual Studio.
-> * Migrate local SignalR to Azure SignalR Service.
+> * Build a simple chat room with the Blazor Server app template.
+> * Work with Razor components.
+> * Use event handling and data binding in Razor components.
+> * Quick-deploy to Azure App Service in Visual Studio.
+> * Migrate from local SignalR to Azure SignalR Service.
-Read more about high availability.
+Read more about high availability:
> [!div class="nextstepaction"] > [Resiliency and disaster recovery](signalr-concept-disaster-recovery.md)-
-## Additional resources
-
-* [ASP.NET Core Blazor](/aspnet/core/blazor)
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
Previously updated : 03/17/2021 Last updated : 05/02/2021 # Auditing for Azure SQL Database and Azure Synapse Analytics
Extended policy with WHERE clause support for additional filtering:
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../azure-resource-manager/management/overview.md) templates, as shown in these examples: -- [Deploy an Azure SQL Database with Auditing enabled to write audit logs to Azure Blob storage account](https://github.com/Azure/azure-quickstart-templates/tree/master/201-sql-auditing-server-policy-to-blob-storage)-- [Deploy an Azure SQL Database with Auditing enabled to write audit logs to Log Analytics](https://github.com/Azure/azure-quickstart-templates/tree/master/201-sql-auditing-server-policy-to-oms)-- [Deploy an Azure SQL Database with Auditing enabled to write audit logs to Event Hubs](https://github.com/Azure/azure-quickstart-templates/tree/master/201-sql-auditing-server-policy-to-eventhub)
+- [Deploy an Azure SQL Database with Auditing enabled to write audit logs to Azure Blob storage account](https://azure.microsoft.com/resources/templates/sql-auditing-server-policy-to-blob-storage/)
+- [Deploy an Azure SQL Database with Auditing enabled to write audit logs to Log Analytics](https://azure.microsoft.com/resources/templates/sql-auditing-server-policy-to-oms/)
+- [Deploy an Azure SQL Database with Auditing enabled to write audit logs to Event Hubs](https://azure.microsoft.com/resources/templates/sql-auditing-server-policy-to-eventhub/)
> [!NOTE]
-> The linked samples are on an external public repository and are provided 'as is', without warranty, and are not supported under any Microsoft support program/service.
+> The linked samples are on an external public repository and are provided 'as is', without warranty, and are not supported under any Microsoft support program/service.
bastion Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/troubleshoot.md
This article shows you how to troubleshoot Azure Bastion.
**A:** If you create and apply an NSG to *AzureBastionSubnet*, make sure you have added the required rules to the NSG. For a list of required rules, see [Working with NSG access and Azure Bastion](./bastion-nsg.md). If you do not add these rules, the NSG creation/update will fail.
-An example of the NSG rules is available for reference in the [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-azure-bastion-nsg).
+An example of the NSG rules is available for reference in the [quickstart template](https://azure.microsoft.com/resources/templates/101-azure-bastion-nsg/).
For more information, see [NSG guidance for Azure Bastion](bastion-nsg.md). ## <a name="sshkey"></a>Unable to use my SSH key with Azure Bastion
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
3. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
-4. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), [AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
+4. Upload your Package (.cspkg) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
5. (Optional) Create a key vault and upload the certificates.
This tutorial explains how to create a Cloud Service (extended support) deployme
- Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 4/15/2021 Last updated : 4/30/2021
The following tables show the Microsoft Security Response Center (MSRC) updates
## April 2021 Guest OS
->[!NOTE]
-
->The April Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
| --- | --- | --- | --- | --- |
-| Rel 21-04 | [5001342] | Latest Cumulative Update(LCU) | 6.30 | Apr 13, 2021 |
-| Rel 21-04 | [4580325] | Flash update | 3.96, 4.89, 5.54, 6.30 | Oct 13, 2020 |
-| Rel 21-04 | [5000800] | IE Cumulative Updates | 2.109, 3.96, 4.89 | Mar 9, 2021 |
-| Rel 21-04 | [5001347] | Latest Cumulative Update(LCU) | 5.54 | Apr 13, 2021 |
-| Rel 21-04 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.109 | Oct 13, 2020 |
-| Rel 21-04 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.109 | Oct 13, 2020 |
-| Rel 21-04 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.89 | Oct 13, 2020 |
-| Rel 21-04 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.89 | Oct 13, 2020 |
-| Rel 21-04 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.96 | Oct 13, 2020 |
-| Rel 21-04 | [4578954] | . NET Framework 4.5.2 Security and Quality Rollup  | 3.96 | Oct 13, 2020 |
-| Rel 21-04 | [4601060] | . NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.30 | Feb 9, 2021 |
-| Rel 21-04 | [5001335] | Monthly Rollup  | 2.109 | Mar 9, 2021 |
-| Rel 21-04 | [5001387] | Monthly Rollup  | 3.96 | Apr 13, 2021 |
-| Rel 21-04 | [5001382] | Monthly Rollup  | 4.89 | Apr 13, 2021 |
-| Rel 21-04 | [5001401] | Servicing Stack update  | 3.96 | Apr 13, 2021 |
-| Rel 21-04 | [5001403] | Servicing Stack update  | 4.89 | Apr 13, 2021 |
-| Rel 21-04 OOB | [4578013] | Standalone Security Update  | 4.89 | Aug 19, 2020 |
-| Rel 21-04 | [5001402] | Servicing Stack update  | 5.54 | Apr 13, 2021 |
-| Rel 21-04 | [4592510] | Servicing Stack update  | 2.109 | Dec 8, 2020 |
-| Rel 21-04 | [5001404] | Servicing Stack update  | 6.30 | Apr 13, 2021 |
-| Rel 21-04 | [4494175] | Microcode  | 5.54 | Sep 1, 2020 |
-| Rel 21-04 | [4494174] | Microcode  | 6.30 | Sep 1, 2020 |
+| Rel 21-04 | [5001342] | Latest Cumulative Update(LCU) | [6.30] | Apr 13, 2021 |
+| Rel 21-04 | [4580325] | Flash update | [3.96], [4.89], [5.54], [6.30] | Oct 13, 2020 |
+| Rel 21-04 | [5000800] | IE Cumulative Updates | [2.109], [3.96], [4.89] | Mar 9, 2021 |
+| Rel 21-04 | [5001347] | Latest Cumulative Update(LCU) | [5.54] | Apr 13, 2021 |
+| Rel 21-04 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | [2.109] | Oct 13, 2020 |
+| Rel 21-04 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | [2.109] | Oct 13, 2020 |
+| Rel 21-04 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | [4.89] | Oct 13, 2020 |
+| Rel 21-04 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | [4.89] | Oct 13, 2020 |
+| Rel 21-04 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | [3.96] | Oct 13, 2020 |
+| Rel 21-04 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup  | [3.96] | Oct 13, 2020 |
+| Rel 21-04 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update  | [6.30] | Feb 9, 2021 |
+| Rel 21-04 | [5001335] | Monthly Rollup  | [2.109] | Mar 9, 2021 |
+| Rel 21-04 | [5001387] | Monthly Rollup  | [3.96] | Apr 13, 2021 |
+| Rel 21-04 | [5001382] | Monthly Rollup  | [4.89] | Apr 13, 2021 |
+| Rel 21-04 | [5001401] | Servicing Stack update  | [3.96] | Apr 13, 2021 |
+| Rel 21-04 | [5001403] | Servicing Stack update  | [4.89] | Apr 13, 2021 |
+| Rel 21-04 OOB | [4578013] | Standalone Security Update  | [4.89] | Aug 19, 2020 |
+| Rel 21-04 | [5001402] | Servicing Stack update  | [5.54] | Apr 13, 2021 |
+| Rel 21-04 | [4592510] | Servicing Stack update  | [2.109] | Dec 8, 2020 |
+| Rel 21-04 | [5001404] | Servicing Stack update  | [6.30] | Apr 13, 2021 |
+| Rel 21-04 | [4494175] | Microcode  | [5.54] | Sep 1, 2020 |
+| Rel 21-04 | [4494174] | Microcode  | [6.30] | Sep 1, 2020 |
[5001342]: https://support.microsoft.com/kb/5001342
[4580325]: https://support.microsoft.com/kb/4580325
The following tables show the Microsoft Security Response Center (MSRC) updates
[5001404]: https://support.microsoft.com/kb/5001404
[4494175]: https://support.microsoft.com/kb/4494175
[4494174]: https://support.microsoft.com/kb/4494174
+[2.109]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.96]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.89]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.54]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.30]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## March 2021 Guest OS
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/build-enrollment-app.md
This guide will show you how to get started with the sample Face enrollment appl
When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
-The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android devices; more deployment options are coming in the future.
+The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
## Prerequisites

* An Azure subscription – [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
* You'll need the key and endpoint from the resource you created to connect your application to Face API.
- * For local development and testing, you can paste the API key and endpoint into the configuration file. For final deployment, store the API key in a secure location and never in the code.
+ * For local development and testing only, you can keep the API key and endpoint in environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables.
-> [!IMPORTANT]
-> These subscription keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+### Important security considerations
+* For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely, which likely involves using an intermediate service to validate a user token generated during login.
+* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
+* As a best practice, consider having separate API keys for development and production.
## Set up the development environment
+#### [Android](#tab/android)
+
1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
-1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation </a>. Select **React Native CLI Quickstart** as your development OS and select **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**.
-1. Open the env.json file in your preferred text editor, such as [Visual Studio Code](https://code.visualstudio.com/), and add your endpoint and key. You can get your endpoint and key in the Azure portal under the **Overview** tab of your resource. This step is only for local testing purposes&mdash;don't check in your Face API key to your remote repository.
-1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation </a>.
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select your development OS and **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/).
+1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+#### [iOS](#tab/ios)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select **macOS** as your development OS and **iOS** as the target OS. Complete the section **Installing dependencies**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/). You will also need to download Xcode.
+1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
## Create a user add experience
To extend the app's functionality to cover the full experience, read the [overvi
## Deploy the app
-### Android
+#### [Android](#tab/android)
-First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../cognitive-services-security.md?tabs=command-line%2ccsharp).
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp).
When you're ready to release your app for production, you'll generate a release-ready APK file, which is the package file format for Android apps. This APK file must be signed with a private key. With this release build, you can begin distributing the app to your devices directly.
-Follow the <a href="https://developer.android.com/studio/publish/preparing#publishing-build" title="Prepare for release" target="_blank">Prepare for release </a> documentation to learn how to generate a private key, sign your application, and generate a release APK.
+Follow the <a href="https://developer.android.com/studio/publish/preparing#publishing-build" title="Prepare for release" target="_blank">Prepare for release <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn how to generate a private key, sign your application, and generate a release APK.
+
+Once you've created a signed APK, see the <a href="https://developer.android.com/studio/publish" title="Publish your app" target="_blank">Publish your app <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn more about how to release your app.
-Once you've created a signed APK, see the <a href="https://developer.android.com/studio/publish" title="Publish your app" target="_blank">Publish your app </a> documentation to learn more about how to release your app.
+#### [iOS](#tab/ios)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp). To prepare for distribution, you will need to create an app icon, a launch screen, and configure deployment info settings. Follow the [documentation from Xcode](https://developer.apple.com/documentation/Xcode/preparing_your_app_for_distribution) to prepare your app for distribution.
+
+When you're ready to release your app for production, you'll build an archive of your app. Follow the [Xcode documentation](https://developer.apple.com/documentation/Xcode/distributing_your_app_for_beta_testing_and_releases) on how to create an archive build and options for distributing your app.
## Next steps
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
Last updated 01/16/2020 zone_pivot_groups: programming-languages-set-two

# Configure OpenSSL for Linux
cognitive-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-track-speech-sdk-memory-usage.md
Last updated 12/10/2019
zone_pivot_groups: programming-languages-set-two

# How to track Speech SDK memory usage
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/long-audio-api.md
# Long Audio API
-The Long Audio API is designed for asynchronous synthesis of long-form text to speech (for example: audio books, news articles and documents). This API doesn't return synthesized audio in real-time, instead the expectation is that you will poll for the response(s) and consume the output(s) as they are made available from the service. Unlike the text to speech API that's used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes, making it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
+The Long Audio API provides asynchronous synthesis of long-form text to speech (for example: audio books, news articles and documents). This API doesn't return synthesized audio in real time. Instead, you poll for the response(s) and consume the output(s) as the service makes them available. Unlike the Text-to-speech API used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes. This makes it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
-Additional benefits of the Long Audio API:
+More benefits of the Long Audio API:
* Synthesized speech returned by the service uses the best neural voices.
-* There's no need to deploy a voice endpoint as it synthesizes voices in none real-time batch mode.
+* There's no need to deploy a voice endpoint.
> [!NOTE]
-> The Long Audio API now supports both [Public Neural Voices](./language-support.md#neural-voices) and [Custom Neural Voices](./how-to-custom-voice.md#custom-neural-voices).
+> The Long Audio API supports both [Public Neural Voices](./language-support.md#neural-voices) and [Custom Neural Voices](./how-to-custom-voice.md#custom-neural-voices).
## Workflow
-Typically, when using the Long Audio API, you'll submit a text file or files to be synthesized, poll for the status, then if the status is successful, you can download the audio output.
+When using the Long Audio API, you'll typically submit a text file or files to be synthesized, poll for the status, and download the audio output when the status indicates success.
This diagram provides a high-level overview of the workflow.
This diagram provides a high-level overview of the workflow.
When preparing your text file, make sure it:
-* Is either plain text (.txt) or SSML text (.txt)
-* Is encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom)
-* Is a single file, not a zip
-* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs
- * For plain text, each paragraph is separated by hitting **Enter/Return** - View [plain text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/en-US.txt)
- * For SSML text, each SSML piece is considered a paragraph. SSML pieces shall be separated by different paragraphs - View [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt)
+* Is either plain text (.txt) or SSML text (.txt).
+* Is encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom).
+* Is a single file, not a zip.
+* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs.
+ * For plain text, each paragraph is separated by hitting **Enter/Return**. See [plain text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/en-US.txt).
+ * For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs. See [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt).
## Sample code
-The remainder of this page will focus on Python, but sample code for the Long Audio API is available on GitHub for the following programming languages:
+
+The rest of this page focuses on Python, but sample code for the Long Audio API is available on GitHub for the following programming languages:
* [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python) * [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/CSharp)
These libraries are used to construct the HTTP request, and call the text-to-spe
To get a list of supported voices, send a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
+This code gets a full list of voices you can use at a specific region/endpoint.
-This code allows you to get a full list of voices for a specific region/endpoint that you can use.
```python def get_voices(): region = '<region>'
Replace the following values:
* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). * Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-You'll see an output that looks like this:
+You'll see output that looks like this:
-```console
+```json
{ "values": [ {
If **properties.publicAvailable** is **true**, the voice is a public neural voic
Prepare an input text file, in either plain text or SSML text, then add the following code to `long_audio_synthesis_client.py`: > [!NOTE]
-> `concatenateResult` is an optional parameter. If this parameter isn't set, the audio outputs will be generated per paragraph. You can also concatenate the audios into 1 output by setting the parameter.
-> `outputFormat` is also optional. By default, the audio output is set to riff-16khz-16bit-mono-pcm. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
+> `concatenateResult` is an optional parameter. If this parameter isn't set, the audio outputs will be generated per paragraph. You can also concatenate the audio outputs into one file by including the parameter.
+> `outputFormat` is also optional. By default, the audio output is set to `riff-16khz-16bit-mono-pcm`. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
```python def submit_synthesis():
voice_identities = [
] ```
-You'll see an output that looks like this:
+You'll see output that looks like this:
```console response.status_code: 202
https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/<guid>
``` > [!NOTE]
-> If you have more than 1 input files, you will need to submit multiple requests. There are some limitations that needs to be aware.
-> * The client is allowed to submit up to **5** requests to server per second for each Azure subscription account. If it exceeds the limitation, client will get a 429 error code (too many requests). Please reduce the request amount per second.
-> * The server is allowed to run and queue up to **120** requests for each Azure subscription account. If it exceeds the limitation, server will return a 429 error code(too many requests). Please wait and avoid submitting new request until some requests are completed.
+> If you have more than one input file, you will need to submit multiple requests, and there are limitations to consider.
+> * The client can submit up to **5** requests per second for each Azure subscription account. If it exceeds the limitation, a **429 error code (too many requests)** is returned. Reduce the rate of submissions to avoid this limit.
+> * The server can queue up to **120** requests for each Azure subscription account. If the queue exceeds this limitation, the server will return a **429 error code (too many requests)**. Wait for completed requests before submitting additional requests.
-The URL in output can be used for getting the request status.
+You can use the URL in the output to get the request status.
-### Get information of a submitted request
+### Get details about a submitted request
+
+To get the status of a submitted synthesis request, send a GET request to the URL returned in the previous step.
-To get status of a submitted synthesis request, simply send a GET request to the URL returned by previous step.
```Python def get_synthesis():
def get_synthesis():
get_synthesis() ```+ Output will be like this:
-```console
+
+```json
response.status_code: 200 { "models": [
response.status_code: 200
} ```
-From `status` property, you can read status of this request. The request will start from `NotStarted` status, then change to `Running`, and finally become `Succeeded` or `Failed`. You can use a loop to poll this API until the status becomes `Succeeded`.
+The `status` property changes from `NotStarted` status, to `Running`, and finally to `Succeeded` or `Failed`. You can poll this API in a loop until the status becomes `Succeeded` or `Failed`.
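The same polling pattern works from any language; as an illustration, a minimal C# sketch (the host, key, and request ID are placeholders) might look like:

```cs
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class SynthesisPoller
{
    // Polls a long audio synthesis request until it reaches a terminal state.
    // The region host, subscription key, and request ID are placeholders.
    static async Task<string> PollAsync(string host, string key, string requestId)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        while (true)
        {
            string url = $"https://{host}/api/texttospeech/v3.0/longaudiosynthesis/{requestId}";
            using var response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();

            using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            string status = json.RootElement.GetProperty("status").GetString();
            if (status == "Succeeded" || status == "Failed")
                return status;

            // Space out the polls; the service throttles clients at 5 requests per second.
            await Task.Delay(TimeSpan.FromSeconds(10));
        }
    }
}
```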
### Download audio result
-Once a synthesis request succeeds, you can download the audio result by calling GET `/files` API.
+Once a synthesis request succeeds, you can download the audio result by calling the GET `/files` API.
```python def get_files():
def get_files():
get_files() ```+ Replace `<request_id>` with the ID of the request whose result you want to download. You can find it in the response of the previous step. Output will be like this:
-```console
+
+```json
response.status_code: 200 { "values": [
response.status_code: 200
] } ```
-The output contains information of 2 files. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request.
-The result is zip which contains the audio output files generated, along with a copy of the input text.
+This example output contains information for two files. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request.
+
+The result is a zip file that contains the audio output files generated, along with a copy of the input text.
Both files can be downloaded from the URL in their `links.contentUrl` property. ### Get all synthesis requests
-You can get a list of all submitted requests with following code:
+The following code lists all submitted requests:
```python def get_synthesis():
get_synthesis()
``` Output will be like:
-```console
+
+```json
response.status_code: 200 { "values": [
response.status_code: 200
} ```
-`values` property contains a list of synthesis requests. The list is paginated, with a maximum page size of 100. If there are more than 100 requests, a `"@nextLink"` property will be provided to get the next page of the paginated list.
+The `values` property lists your synthesis requests. The list is paginated, with a maximum page size of 100. If there are more than 100 requests, a `"@nextLink"` property is provided to get the next page of the paginated list.
```console "@nextLink": "https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/?top=100&skip=100"
You can also customize page size and skip number by providing `skip` and `top` i
### Remove previous requests
-The service will keep up to **20,000** requests for each Azure subscription account. If your request amount exceeds this limitation, please remove previous requests before making new ones. If you don't remove existing requests, you'll receive an error notification.
+The service will keep up to **20,000** requests for each Azure subscription account. If your request amount exceeds this limitation, remove previous requests before making new ones. If you don't remove existing requests, you'll receive an error notification.
The following code shows how to remove a specific synthesis request.+ ```python def delete_synthesis(): id = '<request_id>'
The following table details the HTTP response codes and messages from the REST A
|--||-|-| | Create | 400 | The voice synthesis is not enabled in this region. | Change the speech subscription key with a supported region. | | | 400 | Only the **Standard** speech subscription for this region is valid. | Change the speech subscription key to the "Standard" pricing tier. |
-| | 400 | Exceed the 20,000 request limit for the Azure account. Please remove some requests before submitting new ones. | The server will keep up to 20,000 requests for each Azure account. Delete some requests before submitting new ones. |
-| | 400 | This model cannot be used in the voice synthesis : {modelID}. | Make sure the {modelID}'s state is correct. |
-| | 400 | The region for the request does not match the region for the model : {modelID}. | Make sure the {modelID}'s region match with the request's region. |
+| | 400 | Exceed the 20,000 request limit for the Azure account. Remove some requests before submitting new ones. | The server will keep up to 20,000 requests for each Azure account. Delete some requests before submitting new ones. |
+| | 400 | This model cannot be used in the voice synthesis: {modelID}. | Make sure the {modelID}'s state is correct. |
+| | 400 | The region for the request does not match the region for the model: {modelID}. | Make sure the {modelID}'s region matches the request's region. |
| | 400 | The voice synthesis only supports the text file in the UTF-8 encoding with the byte-order marker. | Make sure the input files are in UTF-8 encoding with the byte-order marker. | | | 400 | Only valid SSML inputs are allowed in the voice synthesis request. | Make sure the input SSML expressions are correct. | | | 400 | The voice name {voiceName} is not found in the input file. | The input SSML voice name is not aligned with the model ID. | | | 400 | The number of paragraphs in the input file should be less than 10,000. | Make sure the number of paragraphs in the file is less than 10,000. | | | 400 | The input file should be more than 400 characters. | Make sure your input file exceeds 400 characters. |
-| | 404 | The model declared in the voice synthesis definition cannot be found : {modelID}. | Make sure the {modelID} is correct. |
-| | 429 | Exceed the active voice synthesis limit. Please wait until some requests finish. | The server is allowed to run and queue up to 120 requests for each Azure account. Please wait and avoid submitting new requests until some requests are completed. |
-| All | 429 | There are too many requests. | The client is allowed to submit up to 5 requests to server per second for each Azure account. Please reduce the request amount per second. |
-| Delete | 400 | The voice synthesis task is still in use. | You can only delete requests that is **Completed** or **Failed**. |
+| | 404 | The model declared in the voice synthesis definition cannot be found: {modelID}. | Make sure the {modelID} is correct. |
+| | 429 | Exceed the active voice synthesis limit. Wait until some requests finish. | The server is allowed to run and queue up to 120 requests for each Azure account. Wait and avoid submitting new requests until some requests are completed. |
+| All | 429 | There are too many requests. | The client is allowed to submit up to 5 requests to server per second for each Azure account. Reduce the request amount per second. |
+| Delete | 400 | The voice synthesis task is still in use. | You can only delete requests that are **Completed** or **Failed**. |
| GetByID | 404 | The specified entity cannot be found. | Make sure the synthesis ID is correct. | ## Regions and endpoints
The Long audio API is available in multiple regions with unique endpoints.
## Audio output formats
-We support flexible audio output formats. You can generate audio outputs per paragraph or concatenate the audio outputs into a single output by setting the 'concatenateResult' parameter. The following audio output formats are supported by the Long Audio API:
+We support flexible audio output formats. You can generate audio outputs per paragraph or concatenate the audio outputs into a single output by setting the `concatenateResult` parameter. The following audio output formats are supported by the Long Audio API:
> [!NOTE] > The default audio format is riff-16khz-16bit-mono-pcm.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
To add a Speech service resource (free or paid tier) to your Azure account:
1. In the **New** window, type "speech" in the search box and press ENTER.
1. In the search results, select **Speech**.
- ![speech search results](media/index/speech-search.png)
+
+ :::image type="content" source="media/index/speech-search.png" alt-text="Create Speech resource in Azure portal.":::
1. Select **Create**, then:
cognitive-services Concept Business Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-business-cards.md
Previously updated : 03/15/2021 Last updated : 04/30/2021
-# Form Recognizer prebuilt business cards model
+# Form Recognizer prebuilt business cards model
-Azure Form Recognizer can analyze and extract contact information from business cards using its prebuilt business cards model. It combines powerful Optical Character Recognition (OCR) capabilities with our business card understanding model to extract key information from business cards in English. It extracts personal contact info, company name, job title, and more. The Prebuilt Business Card API is publicly available in the Form Recognizer v2.1 preview.
+Azure Form Recognizer can analyze and extract contact information from business cards using its prebuilt business cards model. It combines powerful Optical Character Recognition (OCR) capabilities with our business card understanding model to extract key information from business cards in English. It extracts personal contact info, company name, job title, and more. The Prebuilt Business Card API is publicly available in the Form Recognizer v2.1 preview.
## What does the Business Card service do?
The prebuilt Business Card API extracts key fields from business cards and retur
![Contoso itemized image from FOTT + JSON output](./media/business-card-example.jpg)

### Fields extracted:
-|Name| Type | Description | Text |
+|Name| Type | Description | Text |
|:--|:-|:-|:-|
| ContactNames | array of objects | Contact name extracted from business card | [{ "FirstName": "John", "LastName": "Doe" }] |
-| FirstName | string | First (given) name of contact | "John" |
-| LastName | string | Last (family) name of contact | "Doe" |
-| CompanyNames | array of strings | Company name extracted from business card | ["Contoso"] |
-| Departments | array of strings | Department or organization of contact | ["R&D"] |
-| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] |
-| Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] |
-| Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
-| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
+| FirstName | string | First (given) name of contact | "John" |
+| LastName | string | Last (family) name of contact | "Doe" |
+| CompanyNames | array of strings | Company name extracted from business card | ["Contoso"] |
+| Departments | array of strings | Department or organization of contact | ["R&D"] |
+| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] |
+| Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] |
+| Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
+| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
| MobilePhones | array of phone numbers | Mobile phone number extracted from business card | ["+19876543210"] |
| Faxes | array of phone numbers | Fax phone number extracted from business card | ["+19876543211"] |
| WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+19876543231"] |
| OtherPhones | array of phone numbers | Other phone number extracted from business card | ["+19876543233"] |
-The Business Card API can also return all recognized text from the Business Card. This OCR output is included in the JSON response.
+The Business Card API can also return all recognized text from the Business Card. This OCR output is included in the JSON response.
-### Input Requirements
+### Input Requirements
[!INCLUDE [input requirements](./includes/input-requirements-receipts.md)]
+## Supported locales
+
+**Pre-built business cards v2.1-preview.3** (Public Preview) supports the following locales:
+
+* **en-us**
+* **en-au**
+* **en-ca**
+* **en-gb**
+* **en-in**
+
## The Analyze Business Card operation
The [Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeBusinessCardAsync) takes an image or PDF of a business card as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
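A minimal JavaScript (Node 18+) sketch of this first step: submit a business card by URL and read back the `Operation-Location` header. The endpoint host and key are placeholders, and the request-body shape (`{ source: <url> }`) follows the common Form Recognizer URL-input pattern; check it against the linked API reference.

```javascript
// Step 1: submit the business card and capture the Operation-Location URL.
const endpoint = "https://<your-resource>.cognitiveservices.azure.com"; // placeholder
const key = "<your-form-recognizer-key>"; // placeholder

async function analyzeBusinessCard(imageUrl) {
  const response = await fetch(
    `${endpoint}/formrecognizer/v2.1-preview.3/prebuilt/businessCard/analyze`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ source: imageUrl }),
    }
  );
  // The URL containing the Result ID arrives in the Operation-Location header.
  return response.headers.get("operation-location");
}
```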
When the **status** field has the **succeeded** value, the JSON response will in
The response to the Get Analyze Business Card Result operation will be the structured representation of the business card with all the information extracted. See here for a [sample business card file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/business-card-english.jpg) and its structured output [sample business card output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/business-card-result.json). See the following example of a successful JSON response:
-* The `"readResults"` node contains all of the recognized text. Text is organized by page, then by line, then by individual words.
+* The `"readResults"` node contains all of the recognized text. Text is organized by page, then by line, then by individual words.
* The `"documentResults"` node contains the business-card-specific values that the model discovered. This is where you'll find useful contact information like the first name, last name, company name and more. ```json
See the following example of a successful JSON response:
"width": 4032, "height": 3024, "unit": "pixel",
- "lines":
+ "lines":
{ "text": "Dr. Avery Smith", "boundingBox": [
See the following example of a successful JSON response:
"boundingBox": [ 419, ]
-
+ }
],
"documentResults": [
See the following example of a successful JSON response:
Follow the [quickstart](./QuickStarts/client-library.md) to implement business card data extraction using Python and the REST API.
-## Customer Scenarios
+## Customer Scenarios
The data extracted with the Business Card API can be used to perform various tasks. Extracting this contact info automatically saves time for users in client-facing roles. The following are a few examples of what our customers have accomplished with the Business Card API:
-* Extract contact info from Business cards and quickly create phone contacts.
-* Integrate with CRM to automatically create contact using business card images.
-* Keep track of sales leads.
-* Extract contact info in bulk from existing business card images.
+* Extract contact info from business cards and quickly create phone contacts.
+* Integrate with CRM to automatically create a contact using business card images.
+* Keep track of sales leads.
+* Extract contact info in bulk from existing business card images.
The Business Card API also powers the [AI Builder Business Card Processing feature](/ai-builder/prebuilt-business-card).
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
Previously updated : 04/14/2021 Last updated : 04/30/2021
To try out the Form Recognizer IDs service, go to the online Sample UI Tool:
[!INCLUDE [input requirements](./includes/input-requirements-receipts.md)]
-## Supported ID types
+## Supported locales
+
+ **Pre-built ID v2.1-preview.3** (preview) supports identity documents in the **en-us** locale.
+
+## Supported Identity document types
* **Pre-built IDs v2.1-preview.3** Extracts key values from worldwide passports and U.S. driver's licenses.
To try out the Form Recognizer IDs service, go to the online Sample UI Tool:
>
> Currently supported ID types include worldwide passports and U.S. driver's licenses. We are actively seeking to expand our ID support to other identity documents around the world.
-## POST Analyze Id Document
+## POST Analyze ID Document
The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5f74a7daad1f2612c46f5822) operation takes an image or PDF of an ID as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
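The follow-up step is the same polling pattern used by the other prebuilt models: request the `Operation-Location` URL until the `status` field reaches a terminal value. A minimal JavaScript sketch, with the key as a placeholder:

```javascript
// Step 2: poll the Operation-Location URL until analysis finishes.
async function getAnalyzeResult(operationLocation, key) {
  for (;;) {
    const response = await fetch(operationLocation, {
      headers: { "Ocp-Apim-Subscription-Key": key }, // placeholder key
    });
    const result = await response.json();
    // status moves notStarted -> running -> succeeded (or failed).
    if (result.status === "succeeded" || result.status === "failed") {
      return result;
    }
    await new Promise((resolve) => setTimeout(resolve, 1000)); // wait 1 second
  }
}
```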
cognitive-services Concept Invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-invoices.md
Previously updated : 03/15/2021 Last updated : 04/30/2021
# Form Recognizer prebuilt invoice model
-Azure Form Recognizer can analyze and extract information from sales invoices using its prebuilt invoice models. The Invoice API enables customers to take invoices in a variety of formats and return structured data to automate the invoice processing. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/overview-ocr.md) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. The prebuilt Invoice API is publicly available in the Form Recognizer v2.1 preview.
+Azure Form Recognizer can analyze and extract information from sales invoices using its prebuilt invoice models. The Invoice API enables customers to take invoices in various formats and return structured data to automate the invoice processing. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/overview-ocr.md) capabilities with invoice understanding deep learning models to extract key information from invoices written in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. The prebuilt Invoice API is publicly available in the Form Recognizer v2.1 preview.
## What does the Invoice service do?
-The Invoice API extracts key fields and line items from invoices and returns them in an organized structured JSON response. Invoices can be from a variety of formats and quality, including phone-captured images, scanned documents, and digital PDFs. The invoice API will extract the structured output from all of these invoices.
+The Invoice API extracts key fields and line items from invoices and returns them in an organized, structured JSON response. Invoices vary in format and quality, including phone-captured images, scanned documents, and digital PDFs. The Invoice API extracts structured output from all of these invoices.
![Contoso invoice example](./media/invoice-example-new.jpg)
To try out the Form Recognizer Invoice Service, go to the online Sample UI Tool:
> [!div class="nextstepaction"] > [Try Prebuilt Models](https://fott-preview.azurewebsites.net/)
-You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Invoice service.
+You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Invoice service.
:::image type="content" source="media/analyze-invoice-new.png" alt-text="Analyzed invoice example" lightbox="media/analyze-invoice-new.png":::
You will need an Azure subscription ([create one for free](https://azure.microso
[!INCLUDE [input requirements](./includes/input-requirements-receipts.md)]
-## Supported locales
-
-**Pre-built Receipt v2.0** (GA) and **Pre-built Receipt v2.1-preview.3** (preview) support invoices in the EN-US locale.
+## Supported locales
+**Pre-built invoice v2.1-preview.3** (preview) supports invoices in the **en-us** locale.
## The Analyze Invoice operation
The second step is to call the [Get Analyze Invoice Result](https://westcentralu
|:--|:-:|:-|
|status | string | notStarted: The analysis operation has not started.<br /><br />running: The analysis operation is in progress.<br /><br />failed: The analysis operation has failed.<br /><br />succeeded: The analysis operation has succeeded.|
-When the **status** field has the **succeeded** value, the JSON response will include the invoice understanding results, tables extracted and optional text recognition results, if requested. The invoice understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence and corresponding word elements. It also includes the line items extracted where each line item contains the amount, description, unitPrice, quantity etc. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box and confidence information.
+When the **status** field has the **succeeded** value, the JSON response will include the invoice understanding results, tables extracted, and optional text recognition results, if requested. The invoice understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence, and corresponding word elements. It also includes the extracted line items, where each line item contains the amount, description, unitPrice, quantity, and so on. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box, and confidence information.
### Sample JSON output
-The response to the Get Analyze Invoice Result operation will be the structured representation of the invoice with all the information extracted.
+The response to the Get Analyze Invoice Result operation will be the structured representation of the invoice with all the information extracted.
See here for a [sample invoice file](media/sample-invoice.jpg) and its structured output [sample invoice output](media/invoice-example-new.jpg).
-The JSON output has 3 parts:
-* `"readResults"` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
-* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence and a reference to the lines and words in "readResults".
-* `"documentResults"` node contains the invoice specific values and line items that the model discovered. This is where you'll find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total, line items and lots more.
+The JSON output has three parts:
+* `"readResults"` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
+* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults".
+* `"documentResults"` node contains the invoice-specific values and line items that the model discovered. It is where you'll find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total, line items and lots more.
## Example output
-The Invoice service will extract the text, tables and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the output below uses this [sample invoice](media/sample-invoice.jpg)).
+The Invoice service will extract the text, tables, and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the output below uses this [sample invoice](media/sample-invoice.jpg)).
|Name| Type | Description | Text | Value (standardized output) |
|:--|:-|:-|:-| :-|
| CustomerName | string | Customer being invoiced | Microsoft Corp | |
| CustomerId | string | Reference ID for the customer | CID-12345 | |
-| PurchaseOrder | string | A purchase order reference number | PO-3333 | |
-| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | |
-| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
+| PurchaseOrder | string | A purchase order reference number | PO-3333 | |
+| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | |
+| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
| DueDate | date | Date payment for this invoice is due | 12/15/2019 | 2019-12-15 |
| VendorName | string | Vendor who has created this invoice | CONTOSO LTD. | |
| VendorAddress | string | Mailing address for the Vendor | 123 456th St New York, NY, 10001 | |
The Invoice service will extract the text, tables and 26 invoice fields. Followi
| BillingAddressRecipient | string | Name associated with the BillingAddress | Microsoft Services | |
| ShippingAddress | string | Explicit shipping address for the customer | 123 Ship St, Redmond WA, 98052 | |
| ShippingAddressRecipient | string | Name associated with the ShippingAddress | Microsoft Delivery | |
-| SubTotal | number | Subtotal field identified on this invoice | $100.00 | 100 |
+| SubTotal | number | Subtotal field identified on this invoice | $100.00 | 100 |
| TotalTax | number | Total tax field identified on this invoice | $10.00 | 10 |
| InvoiceTotal | number | Total new charges associated with this invoice | $110.00 | 110 |
| AmountDue | number | Total Amount Due to the vendor | $610.00 | 610 |
The Invoice service will extract the text, tables and 26 invoice fields. Followi
| ServiceEndDate | date | End date for the service period (for example, a utility bill service period) | 11/14/2019 | 2019-11-14 |
| PreviousUnpaidBalance | number | Explicit previously unpaid balance | $500.00 | 500 |
-Following are the line items extracted from an invoice in the JSON output response (the output below uses this [sample invoice](./media/sample-invoice.jpg))
+Following are the line items extracted from an invoice in the JSON output response (the output below uses this [sample invoice](./media/sample-invoice.jpg))
|Name| Type | Description | Text (line item #1) | Value (standardized output) |
|:--|:-|:-|:-| :-|
Following are the line items extracted from an invoice in the JSON output respon
| Quantity | number | The quantity for this invoice line item | 2 | 2 |
| UnitPrice | number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
| ProductCode | string| Product code, product number, or SKU associated with the specific line item | A123 | |
-| Unit | string| The unit of the line item e.g kg, lb etc. | hours | |
-| Date | date| Date corresponding to each line item. Often this is a date the line item was shipped | 3/4/2021| 2021-03-04 |
+| Unit | string| The unit of the line item, for example, kg, lb, etc. | hours | |
+| Date | date| Date corresponding to each line item. Often it is a date the line item was shipped | 3/4/2021| 2021-03-04 |
| Tax | number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
cognitive-services Concept Receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-receipts.md
Previously updated : 03/15/2021 Last updated : 04/30/2021
Azure Form Recognizer can analyze and extract information from sales receipts us
## Understanding Receipts
-Many businesses and individuals still rely on manually extracted data from sales receipts. Automatically extracting data from these receipts can be complicated. Receipts may be crumpled, hard to read, have handwritten parts and contain low-quality smartphone images. Also, receipt templates and fields can vary greatly by market, region, and merchant. These data extraction and field detection challenges make receipt processing a unique problem.
+Many businesses and individuals still rely on manually extracted data from sales receipts. Automatically extracting data from these receipts can be complicated. Receipts may be crumpled, hard to read, have handwritten parts and contain low-quality smartphone images. Also, receipt templates and fields can vary greatly by market, region, and merchant. These data extraction and field detection challenges make receipt processing a unique problem.
The Receipt API uses Optical Character Recognition (OCR) and our prebuilt model to enable a wide range of receipt processing scenarios. With the Receipt API, there is no need to train a model. Send the receipt image to the Analyze Receipt API and the data is extracted.

![sample receipt](./media/receipts-example.jpg)
-## What does the Receipt service do?
+## What does the Receipt service do?
The prebuilt Receipt service extracts the contents of sales receipts: the type of receipt you would commonly get at a restaurant, retailer, or grocery store.
To try out the Form Recognizer receipt service, go to the online Sample UI Tool:
[!INCLUDE [input requirements](./includes/input-requirements-receipts.md)]
-## Supported locales
+## Supported locales
-* **Pre-built Receipt v2.0** (GA) supports sales receipts in the EN-US locale
-* **Pre-built Receipt v2.1-preview.3** (Public Preview) adds additional support for the following EN receipt locales:
- * EN-AU
- * EN-CA
- * EN-GB
- * EN-IN
+* **Pre-built receipt v2.0** (GA) supports sales receipts in the **en-us** locale
+* **Pre-built receipt v2.1-preview.3** (Public Preview) adds support for the following English receipt locales:
+
+* **en-au**
+* **en-ca**
+* **en-gb**
+* **en-in**
> [!NOTE]
- > Language input
+ > Language input
>
> Prebuilt Receipt v2.1-preview.3 has an optional request parameter to specify a receipt locale from additional English markets. For sales receipts in English from Australia (EN-AU), Canada (EN-CA), Great Britain (EN-GB), and India (EN-IN), you can specify the locale to get improved results. If no locale is specified in v2.1-preview.3, the model will default to the EN-US model.
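A minimal JavaScript sketch of passing that optional locale as a query-string parameter on the analyze call. The endpoint host, key, and image URL are placeholders, and the exact parameter name should be checked against the API reference.

```javascript
// Analyze a receipt with an explicit en-GB locale (falls back to en-US if omitted).
async function analyzeReceipt(endpoint, key, imageUrl) {
  const response = await fetch(
    `${endpoint}/formrecognizer/v2.1-preview.3/prebuilt/receipt/analyze?locale=en-GB`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ source: imageUrl }), // placeholder image URL
    }
  );
  return response.headers.get("operation-location");
}
```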
When the **status** field has the **succeeded** value, the JSON response will in
The response to the Get Analyze Receipt Result operation will be the structured representation of the receipt with all the information extracted. See here for a [sample receipt file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/contoso-allinone.jpg) and its structured output [sample receipt output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/receipt-result.json). See the following example of a successful JSON response:
-* The `"readResults"` node contains all of the recognized text. Text is organized by page, then by line, then by individual words.
+* The `"readResults"` node contains all of the recognized text. Text is organized by page, then by line, then by individual words.
* The `"documentResults"` node contains the business-card-specific values that the model discovered. This is where you'll find useful key/value pairs like the first name, last name, company name and more. ```json
-{
+{
"status":"succeeded",
"createdDateTime":"2019-12-17T04:11:24Z",
"lastUpdatedDateTime":"2019-12-17T04:11:32Z",
- "analyzeResult":{
+ "analyzeResult":{
"version":"2.0.0",
- "readResults":[
- {
+ "readResults":[
+ {
"page":1,
"angle":0.6893,
"width":1688,
"height":3000,
"unit":"pixel",
"language":"en",
- "lines":[
- {
+ "lines":[
+ {
"text":"Contoso",
- "boundingBox":[
+ "boundingBox":[
635,
510,
1086,
See the following example of a successful JSON response:
643,
604
],
- "words":[
- {
+ "words":[
+ {
"text":"Contoso",
- "boundingBox":[
+ "boundingBox":[
639,
510,
1087,
See the following example of a successful JSON response:
]
}
],
- "documentResults":[
- {
+ "documentResults":[
+ {
"docType":"prebuilt:receipt",
- "pageRange":[
+ "pageRange":[
1,
1
],
- "fields":{
- "ReceiptType":{
+ "fields":{
+ "ReceiptType":{
"type":"string",
"valueString":"Itemized",
"confidence":0.692
},
- "MerchantName":{
+ "MerchantName":{
"type":"string",
"valueString":"Contoso Contoso",
"text":"Contoso Contoso",
- "boundingBox":[
+ "boundingBox":[
378.2,
292.4,
1117.7,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.613,
- "elements":[
+ "elements":[
"#/readResults/0/lines/0/words/0",
"#/readResults/0/lines/1/words/0"
]
},
- "MerchantAddress":{
+ "MerchantAddress":{
"type":"string",
"valueString":"123 Main Street Redmond, WA 98052",
"text":"123 Main Street Redmond, WA 98052",
- "boundingBox":[
+ "boundingBox":[
302,
675.8,
848.1,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.99,
- "elements":[
+ "elements":[
"#/readResults/0/lines/2/words/0",
"#/readResults/0/lines/2/words/1",
"#/readResults/0/lines/2/words/2",
See the following example of a successful JSON response:
"#/readResults/0/lines/3/words/2"
]
},
- "MerchantPhoneNumber":{
+ "MerchantPhoneNumber":{
"type":"phoneNumber",
"valuePhoneNumber":"+19876543210",
"text":"987-654-3210",
- "boundingBox":[
+ "boundingBox":[
278,
1004,
656.3,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.99,
- "elements":[
+ "elements":[
"#/readResults/0/lines/4/words/0"
]
},
- "TransactionDate":{
+ "TransactionDate":{
"type":"date",
"valueDate":"2019-06-10",
"text":"6/10/2019",
- "boundingBox":[
+ "boundingBox":[
265.1,
1228.4,
525,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.99,
- "elements":[
+ "elements":[
"#/readResults/0/lines/5/words/0"
]
},
- "TransactionTime":{
+ "TransactionTime":{
"type":"time",
"valueTime":"13:59:00",
"text":"13:59",
- "boundingBox":[
+ "boundingBox":[
541,
1248,
677.3,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.977,
- "elements":[
+ "elements":[
"#/readResults/0/lines/5/words/1"
]
},
- "Items":{
+ "Items":{
"type":"array",
- "valueArray":[
- {
+ "valueArray":[
+ {
"type":"object",
- "valueObject":{
- "Quantity":{
+ "valueObject":{
+ "Quantity":{
"type":"number",
"text":"1",
- "boundingBox":[
+ "boundingBox":[
245.1,
1581.5,
300.9,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.92,
- "elements":[
+ "elements":[
"#/readResults/0/lines/7/words/0"
]
},
- "Name":{
+ "Name":{
"type":"string",
"valueString":"Cappuccino",
"text":"Cappuccino",
- "boundingBox":[
+ "boundingBox":[
322,
1586,
654.2,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.923,
- "elements":[
+ "elements":[
"#/readResults/0/lines/7/words/1"
]
},
- "TotalPrice":{
+ "TotalPrice":{
"type":"number",
"valueNumber":2.2,
"text":"$2.20",
- "boundingBox":[
+ "boundingBox":[
1107.7,
1584,
1263,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.918,
- "elements":[
+ "elements":[
"#/readResults/0/lines/8/words/0"
]
}
See the following example of a successful JSON response:
...
]
},
- "Subtotal":{
+ "Subtotal":{
"type":"number",
"valueNumber":11.7,
"text":"11.70",
- "boundingBox":[
+ "boundingBox":[
1146,
2221,
1297.3,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.955,
- "elements":[
+ "elements":[
"#/readResults/0/lines/13/words/1"
]
},
- "Tax":{
+ "Tax":{
"type":"number",
"valueNumber":1.17,
"text":"1.17",
- "boundingBox":[
+ "boundingBox":[
1190,
2359,
1304,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.979,
- "elements":[
+ "elements":[
"#/readResults/0/lines/15/words/1"
]
},
- "Tip":{
+ "Tip":{
"type":"number",
"valueNumber":1.63,
"text":"1.63",
- "boundingBox":[
+ "boundingBox":[
1094,
2479,
1267.7,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.941,
- "elements":[
+ "elements":[
"#/readResults/0/lines/17/words/1"
]
},
- "Total":{
+ "Total":{
"type":"number",
"valueNumber":14.5,
"text":"$14.50",
- "boundingBox":[
+ "boundingBox":[
1034.2,
2617,
1387.5,
See the following example of a successful JSON response:
],
"page":1,
"confidence":0.985,
- "elements":[
+ "elements":[
"#/readResults/0/lines/19/words/0"
]
}
See the following example of a successful JSON response:
}
```
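As a small JavaScript sketch grounded in the sample above: the `Items` field is an array of objects, each exposing `Quantity`, `Name`, and `TotalPrice` sub-fields with typed values and confidences.

```javascript
// Iterate the line items from the sample receipt result shown above.
function listReceiptItems(result) {
  const fields = result.analyzeResult.documentResults[0].fields;
  for (const item of fields.Items?.valueArray ?? []) {
    const line = item.valueObject;
    // e.g. "Cappuccino" -- 2.2 (confidence 0.918 in the sample above)
    console.log(line.Name?.valueString, line.TotalPrice?.valueNumber);
  }
  console.log("Total:", fields.Total?.valueNumber); // 14.5 in the sample
}
```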
-## Customer scenarios
+## Customer scenarios
The data extracted with the Receipt API can be used to perform a variety of tasks. Below are a few examples of what customers have accomplished with the Receipt API.
-### Business expense reporting
+### Business expense reporting
-Often filing business expenses involves spending time manually entering data from images of receipts. With the Receipt API, you can use the extracted fields to partially automate this process and analyze your receipts quickly.
+Often, filing business expenses involves spending time manually entering data from images of receipts. With the Receipt API, you can use the extracted fields to partially automate this process and analyze your receipts quickly.
-The Receipt API is a simple JSON output allowing you to use the extracted field values in multiple ways. Integrate with internal expense applications to pre-populate expense reports. For more on this scenario, read about how Acumatica is utilizing Receipt API to [make expense reporting a less painful process](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure).
+The Receipt API provides a simple JSON output, allowing you to use the extracted field values in multiple ways. Integrate with internal expense applications to pre-populate expense reports. For more on this scenario, read about how Acumatica is utilizing the Receipt API to [make expense reporting a less painful process](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure).
### Auditing and accounting
-The Receipt API output can also be used to perform analysis on a large number of expenses at various points in the expense reporting and reimbursement process. You can process receipts to triage them for manual audit or quick approvals.
+The Receipt API output can also be used to perform analysis on a large number of expenses at various points in the expense reporting and reimbursement process. You can process receipts to triage them for manual audit or quick approvals.
The Receipt output is also useful for general book-keeping for business or personal use. Use the Receipt API to transform any raw receipt image/PDF data into a digital output that is actionable.
-### Consumer behavior
+### Consumer behavior
Receipts contain useful data which you can use to analyze consumer behavior and shopping trends.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/language-support.md
Previously updated : 03/15/2021 Last updated : 04/30/2021
This table lists the human languages supported by the Form Recognizer service.
|Danish | `da` | ✔️ | |
|Dutch | `nl` | ✔️ | ✔️ |
|English (printed and handwritten) | `en` | ✔️ | ✔️ |
-|Estonian |`crh`| ✔️ | |
+|Estonian |`et`| ✔️ | |
|Fijian |`fj`| ✔️ | |
|Filipino |`fil`| ✔️ | |
|Finnish | `fi` | ✔️ | |
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
If access to devices is granted, device permissions are reset after some time.
<br/>Operating System: iOS

### Sometimes it takes a long time to render remote participant videos
-During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](./voice-video-calling/network-requirements.md) documentation for network configuration guidance.
+During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](./voice-video-calling/network-requirements.md) documentation for network configuration guidance.
+
+### Using third-party libraries to access getUserMedia during the call may result in audio loss
+Calling getUserMedia separately inside the application results in a lost audio stream, because a third-party library takes over device access from the ACS library.
+Developers are encouraged to do the following:
+1. Don't use third-party libraries that internally use the getUserMedia API during the call.
+2. If you still need to use such a library, the only way to recover is to either change the selected device (if the user has more than one) or restart the call. A device-switch sketch follows the **Possible causes** section below.
+
+<br/>Browsers: Safari
+<br/>Operating System: iOS
+
+#### Possible causes
+In some browsers (Safari, for example), acquiring your own stream from the same device has the side effect of running into race conditions. Acquiring streams from other devices may leave the user with insufficient USB/IO bandwidth, and the sourceUnavailableError rate will rise sharply.
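For the device-switch recovery option mentioned above, a hedged JavaScript sketch. It assumes the `deviceManager` and `call` objects from the Calling SDK (`@azure/communication-calling`); `getMicrophones`, `selectMicrophone`, and `hangUp` are the SDK's device and call controls, but treat this as an illustration rather than a guaranteed fix.

```javascript
// Recover audio after a third-party getUserMedia call took over the device:
// switch to another microphone if one exists, otherwise restart the call.
async function recoverAudio(deviceManager, call) {
  const microphones = await deviceManager.getMicrophones();
  const current = deviceManager.selectedMicrophone;
  const fallback = microphones.find((m) => !current || m.id !== current.id);
  if (fallback) {
    await deviceManager.selectMicrophone(fallback); // re-acquires the stream
  } else {
    // Single-microphone case: hang up and re-join with your CallAgent.
    await call.hangUp();
  }
}
```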
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Key features of the Calling SDK:
The following list presents the set of features which are currently available in the Azure Communication Services Calling SDKs.
-| Group of features | Capability | JS | Java (Android) | Objective-C (iOS)
-| -- | - | | -- | -
-| Core Capabilities | Place a one-to-one call between two users | ✔️ | ✔️ | ✔️
-| | Place a group call with more than two users (up to 350 users) | ✔️ | ✔️ | ✔️
-| | Promote a one-to-one call with two users into a group call with more than two users | ✔️ | ✔️ | ✔️
-| | Join a group call after it has started | ✔️ | ✔️ | ✔️
-| | Invite another VoIP participant to join an ongoing group call | ✔️ | ✔️ | ✔️
-| Mid call control | Turn your video on/off | ✔️ | ✔️ | ✔️
-| | Mute/Unmute mic | ✔️ | ✔️ | ✔️
-| | Switch between cameras | ✔️ | ✔️ | ✔️
-| | Local hold/un-hold | ✔️ | ✔️ | ✔️
-| | Active speaker | ✔️ | ✔️ | ✔️
-| | Choose speaker for calls | ✔️ | ✔️ | ✔️
-| | Choose microphone for calls | ✔️ | ✔️ | ✔️
-| | Show state of a participant<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | ✔️ | ✔️
-| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️
-| | Show if a participant is muted | ✔️ | ✔️ | ✔️
-| | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️
-| Screen sharing | Share the entire screen from within the application | ✔️ | ❌ | ❌
-| | Share a specific application (from the list of running applications) | ✔️ | ❌ | ❌
-| | Share a web browser tab from the list of open tabs | ✔️ | ❌ | ❌
-| | Participant can view remote screen share | ✔️ | ✔️ | ✔️
-| Roster | List participants | ✔️ | ✔️ | ✔️
-| | Remove a participant | ✔️ | ✔️ | ✔️
-| PSTN | Place a one-to-one call with a PSTN participant | ✔️ | ✔️ | ✔️
-| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️
-| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️
-| | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️
-| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️
-| Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️
-| | Get camera list | ✔️ | ✔️ | ✔️
-| | Set camera | ✔️ | ✔️ | ✔️
-| | Get selected camera | ✔️ | ✔️ | ✔️
-| | Get microphone list | ✔️ | ❌ |❌
-| | Set microphone | ✔️ | ❌ | ❌
-| | Get selected microphone | ✔️ | ❌ | ❌
-| | Get speakers list | ✔️ | ❌ | ❌
-| | Set speaker | ✔️ | ❌ | ❌
-| | Get selected speaker | ✔️ | ❌ | ❌
-| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️
-| | Set / update scaling mode | ✔️ | ✔️ | ✔️
-| | Render remote video stream | ✔️ | ✔️ | ✔️
+| Group of features | Capability | JS | Java (Android) | Objective-C (iOS) |
+| -- | - | | -- | -- |
+| Core Capabilities | Place a one-to-one call between two users | ✔️ | ✔️ | ✔️ |
+| | Place a group call with more than two users (up to 350 users) | ✔️ | ✔️ | ✔️ |
+| | Promote a one-to-one call with two users into a group call with more than two users | ✔️ | ✔️ | ✔️ |
+| | Join a group call after it has started | ✔️ | ✔️ | ✔️ |
+| | Invite another VoIP participant to join an ongoing group call | ✔️ | ✔️ | ✔️ |
+| Mid call control | Turn your video on/off | ✔️ | ✔️ | ✔️ |
+| | Mute/Unmute mic | ✔️ | ✔️ | ✔️ |
+| | Switch between cameras | ✔️ | ✔️ | ✔️ |
+| | Local hold/un-hold | ✔️ | ✔️ | ✔️ |
+| | Active speaker | ✔️ | ✔️ | ✔️ |
+| | Choose speaker for calls | ✔️ | ✔️ | ✔️ |
+| | Choose microphone for calls | ✔️ | ✔️ | ✔️ |
+| | Show state of a participant<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | ✔️ | ✔️ |
+| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️ |
+| | Show if a participant is muted | ✔️ | ✔️ | ✔️ |
+| | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ |
+| Screen sharing | Share the entire screen from within the application | ✔️ | ❌ | ❌ |
+| | Share a specific application (from the list of running applications) | ✔️ | ❌ | ❌ |
+| | Share a web browser tab from the list of open tabs | ✔️ | ❌ | ❌ |
+| | Participant can view remote screen share | ✔️ | ✔️ | ✔️ |
+| Roster | List participants | ✔️ | ✔️ | ✔️ |
+| | Remove a participant | ✔️ | ✔️ | ✔️ |
+| PSTN | Place a one-to-one call with a PSTN participant | ✔️ | ✔️ | ✔️ |
+| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ |
+| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ |
+| | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ |
+| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ |
+| Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ |
+| | Get camera list | ✔️ | ✔️ | ✔️ |
+| | Set camera | ✔️ | ✔️ | ✔️ |
+| | Get selected camera | ✔️ | ✔️ | ✔️ |
+| | Get microphone list | ✔️ | ❌ | ❌ |
+| | Set microphone | ✔️ | ❌ | ❌ |
+| | Get selected microphone | ✔️ | ❌ | ❌ |
+| | Get speakers list | ✔️ | ❌ | ❌ |
+| | Set speaker | ✔️ | ❌ | ❌ |
+| | Get selected speaker | ✔️ | ❌ | ❌ |
+| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ |
+| | Set / update scaling mode | ✔️ | ✔️ | ✔️ |
+| | Render remote video stream | ✔️ | ✔️ | ✔️ |
## Calling SDK streaming support

The Communication Services Calling SDK supports the following streaming configurations:
-| Limit |Web | Android/iOS|
-|--|-||
-|**# of outgoing streams that can be sent simultaneously** |1 video or 1 screen sharing | 1 video + 1 screen sharing|
-|**# of incoming streams that can be rendered simultaneously** |1 video or 1 screen sharing| 6 video + 1 screen sharing |
+| Limit | Web | Android/iOS |
+| - | | -- |
+| **# of outgoing streams that can be sent simultaneously** | 1 video or 1 screen sharing | 1 video + 1 screen sharing |
+| **# of incoming streams that can be rendered simultaneously** | 1 video or 1 screen sharing | 6 video + 1 screen sharing |
## Calling SDK timeouts

The following timeouts apply to the Communication Services Calling SDKs:
-| Action | Timeout in seconds |
-| -- | - |
-| Reconnect/removal participant | 120 |
-| Add or remove new modality from a call (Start/stop video or screen sharing) | 40 |
-| Call Transfer operation timeout | 60 |
-| 1:1 call establishment timeout | 85 |
-| Group call establishment timeout | 85 |
-| PSTN call establishment timeout | 115 |
-| Promote 1:1 call to a group call timeout | 115 |
+| Action | Timeout in seconds |
+| | |
+| Reconnect/remove a participant | 120 |
+| Add or remove new modality from a call (Start/stop video or screen sharing) | 40 |
+| Call Transfer operation timeout | 60 |
+| 1:1 call establishment timeout | 85 |
+| Group call establishment timeout | 85 |
+| PSTN call establishment timeout | 115 |
+| Promote 1:1 call to a group call timeout | 115 |
## JavaScript Calling SDK support by OS and browser

The following table represents the set of supported browsers which are currently available. We support the most recent three versions of the browser unless otherwise indicated.
-| Platform | Chrome | Safari* | Edge (Chromium) |
-| -- | -| | -- |
-| Android | ✔️ | ❌ | ❌ |
-| iOS | ❌ | ✔️**** | ❌ |
-| macOS*** | ✔️ | ✔️** | ❌ |
-| Windows*** | ✔️ | ❌ | ✔️ |
-| Ubuntu/Linux | ✔️ | ❌ | ❌ |
-
-*Safari versions 13.1+ are supported, 1:1 calls are not supported on Safari.
-
-**Safari 14+/macOS 11+ needed for outgoing video support.
-
-***Outgoing screen sharing is supported only on desktop platforms (Windows, macOS, and Linux), regardless of the browser version, and is not supported on any mobile platform (Android, iOS, iPad, and tablets).
-
-****An iOS app on Safari can't enumerate/select mic and speaker devices (for example, Bluetooth); this is a limitation of the OS, and there's always only one device.
+| Platform | Chrome | Safari | Edge (Chromium) | Notes |
+| | | | | -- |
+| Android | ✔️ | ❌ | ❌ | Outgoing screen sharing is not supported. |
+| iOS | ❌ | ✔️ | ❌ | An iOS app on Safari can't enumerate/select mic and speaker devices (for example, Bluetooth); this is a limitation of the OS, and there's always only one device. Outgoing screen sharing is not supported. |
+| macOS | ✔️ | ✔️ | ❌ | Safari 14+/macOS 11+ needed for outgoing video support. |
+| Windows | ✔️ | ❌ | ✔️ | |
+| Ubuntu/Linux | ✔️ | ❌ | ❌ | |
+* Safari versions 13.1+ are supported; 1:1 calls are not supported on Safari.
+* Unless otherwise specified, the past 3 versions of each browser are supported.
## Calling client - browser security model
container-registry Container Registry Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-faq.md
For registry troubleshooting guidance, see:
### Can I create an Azure Container Registry using a Resource Manager template?
-Yes. Here is [a template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-container-registry) that you can use to create a registry.
+Yes. Here is [a template](https://azure.microsoft.com/resources/templates/101-container-registry/) that you can use to create a registry.
### Is there security vulnerability scanning for images in ACR?
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-geo-replication.md
Azure Container Registry also supports [availability zones](zone-redundancy.md)
## Configure geo-replication
-Configuring geo-replication is as easy as clicking regions on a map. You can also manage geo-replication using tools including the [az acr replication](/cli/azure/acr/replication) commands in the Azure CLI, or deploy a registry enabled for geo-replication with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-container-registry-geo-replication).
+Configuring geo-replication is as easy as clicking regions on a map. You can also manage geo-replication using tools including the [az acr replication](/cli/azure/acr/replication) commands in the Azure CLI, or deploy a registry enabled for geo-replication with an [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/101-container-registry-geo-replication/).
Geo-replication is a feature of [Premium registries](container-registry-skus.md). If your registry isn't yet Premium, you can change from Basic and Standard to Premium in the [Azure portal](https://portal.azure.com):
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-write-stored-procedures-triggers-udfs.md
function tax(income) {
For examples of how to register and use a user-defined function, see the [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#udfs) article.
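Continuing the `tax` example above, a minimal sketch with the JavaScript SDK (`@azure/cosmos`): register the function once, then call it in a query through the `udf.` prefix. The database and container names and the function body are illustrative placeholders.

```javascript
// Register the tax UDF on a container, then use it from a SQL query.
const { CosmosClient } = require("@azure/cosmos");
const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
const container = client.database("mydb").container("incomes"); // placeholders

async function registerAndQuery() {
  await container.scripts.userDefinedFunctions.create({
    id: "tax",
    body: "function tax(income) { return income * 0.2; }", // illustrative body
  });
  const { resources } = await container.items
    .query("SELECT c.name, udf.tax(c.income) AS tax FROM c")
    .fetchAll();
  console.log(resources);
}
```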
-## Logging
+## Logging
-When using stored procedure, triggers or user-defined functions, you can log the steps using the `console.log()` command. This command will concentrate a string for debugging when `EnableScriptLogging` is set to true as shown in the following example:
+When using stored procedures, triggers, or user-defined functions, you can log steps by enabling script logging. A string for debugging is generated when `EnableScriptLogging` is set to true, as shown in the following examples:
+
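For reference, a minimal server-side stored procedure whose `console.log` output lands in the script log that the snippets below retrieve; the procedure body is illustrative.

```javascript
// Server-side stored procedure: console.log output is captured in the
// script log when the caller enables script logging (see the tabs below).
function loggingSample() {
    var context = getContext();
    console.log("loggingSample: starting"); // surfaces in the script log
    context.getResponse().setBody("done");
}
```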
+# [JavaScript](#tab/javascript)
+
+```javascript
+let requestOptions = { enableScriptLogging: true };
+const { resource: result, headers: responseHeaders } = await container.scripts
+ .storedProcedure(Sproc.id)
+ .execute(undefined, [], requestOptions);
+console.log(responseHeaders[Constants.HttpHeaders.ScriptLogResults]);
+```
+
+# [C#](#tab/csharp)
```csharp
var response = await client.ExecuteStoredProcedureAsync(
document.SelfLink,
new RequestOptions { EnableScriptLogging = true } );
Console.WriteLine(response.ScriptLog);
```

## Next steps
cost-management-billing Cost Mgt Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/cost-mgt-best-practices.md
To learn more about the various options, visit [How to buy Azure](https://azure.
#### [Free](https://azure.microsoft.com/free/)
- 12 months of popular free services
-- USD200 credit in your billing currency to explore services for 30 days
+- $200 credit in your billing currency to explore services for 30 days
- 25+ services are always free

#### [Pay as you go](https://azure.microsoft.com/offers/ms-azr-0003p)
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/avoid-charges-free-account.md
# Avoid charges with your Azure free account
-Eligible new users get USD200 Azure credit in your billing currency for the first 30 days and a limited quantity of free services for 12 months with your [Azure free account](https://azure.microsoft.com/free/). To learn about limits of free services, see the [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/). As long as you have unexpired credit or you use only free services within the limits, you're not charged.
+Eligible new users get $200 Azure credit in your billing currency for the first 30 days and a limited quantity of free services for 12 months with your [Azure free account](https://azure.microsoft.com/free/). To learn about limits of free services, see the [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/). As long as you have unexpired credit or you use only free services within the limits, you're not charged.
Let's look at some of the reasons you can incur charges on your Azure free account.
cost-management-billing Create Free Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/create-free-services.md
# Create services included with Azure free account
-During the first 30 days after you've created an Azure free account, you have USD200 credit in your billing currency to use on any service, except for third-party Marketplace purchases. You can experiment with different tiers and types of Azure services using the free credit to try out Azure. If you use services or Azure resources that aren't free during that time, charges are deducted against your credit.
+During the first 30 days after you've created an Azure free account, you have $200 credit in your billing currency to use on any service, except for third-party Marketplace purchases. You can experiment with different tiers and types of Azure services using the free credit to try out Azure. If you use services or Azure resources that aren't free during that time, charges are deducted against your credit.
If you don't use all of your credit by the end of the first 30 days, it's lost. After the first 30 days and up to 12 months after sign-up, you can only use a limited quantity of *some services*; not all Azure services are free. If you upgrade before 30 days and have remaining credit, you can use the rest of your credit with a pay-as-you-go subscription for the remaining days. For example, if you sign up for the free account on November 1 and upgrade on November 5, you have until November 30 to use your credit in the new pay-as-you-go subscription.
cost-management-billing Spending Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/spending-limit.md
tags: billing
Previously updated : 03/29/2021 Last updated : 04/30/2021
# Azure spending limit
-The spending limit in Azure prevents spending over your credit amount. All new customers who sign up for an Azure free account or subscription types that include credits over multiple months have the spending limit turned on by default. The spending limit is equal to the amount of credit. You can't change the amount of the spending limit. For example, if you signed up for Azure free account, your spending limit is $200 and you can't change it to $500. However, you can remove the spending limit. So, you either have no limit, or you have a limit equal to the amount of credit. This prevents you from most kinds of spending. The spending limit isn't available for subscriptions with commitment plans or with pay-as-you-go pricing. See the [full list of Azure subscription types and the availability of the spending limit](https://azure.microsoft.com/support/legal/offer-details/).
+The spending limit in Azure prevents spending over your credit amount. All new customers who sign up for an Azure free account or subscription types that include credits over multiple months have the spending limit turned on by default. The spending limit is equal to the amount of credit. You can't change the amount of the spending limit. For example, if you signed up for Azure free account, your spending limit is $200 and you can't change it to $500. However, you can remove the spending limit. So, you either have no limit, or you have a limit equal to the amount of credit. This prevents you from most kinds of spending.
+
+The spending limit isn't available for subscriptions with commitment plans or with pay-as-you-go pricing. For those types of subscriptions, a spending limit isn't shown in the Azure portal and you can't enable one. See the [full list of Azure subscription types and the availability of the spending limit](https://azure.microsoft.com/support/legal/offer-details/).
## Reaching a spending limit
If you have an Azure free account, see [Upgrade your Azure subscription](upgrade
<a id="remove"></a> 1. Sign in to the [Azure portal](https://portal.azure.com) as the Account Administrator.
-1. Search for **Cost Management + Billing**.
-
- ![Screenshot that shows search for cost management + billing ](./media/spending-limit/search-bar.png)
-
-1. In the **My subscriptions** list, select your subscription. For example, *Visual Studio Enterprise*.
-
- ![Screenshot that shows my subscriptions grid in overview](./media/spending-limit/cost-management-overview-msdn-x.png)
-
+1. Search for **Cost Management + Billing**.
+ :::image type="content" source="./media/spending-limit/search-bar.png" alt-text="Screenshot that shows search for cost management + billing." lightbox="./media/spending-limit/search-bar.png" :::
+1. In the **My subscriptions** list, select your subscription. For example, *Visual Studio Enterprise*.
+ :::image type="content" source="./media/spending-limit/cost-management-overview-msdn-x.png" alt-text="Screenshot that shows my subscriptions grid in overview." lightbox="./media/spending-limit/cost-management-overview-msdn-x.png" :::
> [!NOTE] > If you don't see some of your Visual Studio subscriptions here, it might be because you changed a subscription directory at some point. For these subscriptions, you need to switch the directory to the original directory (the directory in which you initially signed up). Then, repeat step 2.-
-1. In the Subscription overview, click the orange banner to remove the spending limit.
-
- ![Screenshot that shows remove spending limit banner](./media/spending-limit/msdn-remove-spending-limit-banner-x.png)
-
-1. Choose whether you want to remove the spending limit indefinitely or for the current billing period only.
-
- ![Screenshot that shows remove spending limit blade](./media/spending-limit/remove-spending-limit-blade-x.png)
-
- | Option | Effect |
- | | |
- | Remove spending limit indefinitely | Spending limit does not automatically turn back on at the start of the next billing period. However, you can turn it back on yourself at any time. |
- | Remove spending limit for the current billing period | Spending limit automatically turns back on at the start of the next billing period. |
--
+1. In the Subscription overview, click the banner to remove the spending limit.
+ :::image type="content" source="./media/spending-limit/msdn-remove-spending-limit-banner-x.png" alt-text="Screenshot that shows remove spending limit banner." lightbox="./media/spending-limit/msdn-remove-spending-limit-banner-x.png" :::
+1. Choose whether you want to remove the spending limit indefinitely or for the current billing period only.
+ :::image type="content" source="./media/spending-limit/remove-spending-limit-blade-x.png" alt-text="Screenshot that shows remove spending limit page." lightbox="./media/spending-limit/remove-spending-limit-blade-x.png" :::
+ - Selecting the **Remove spending limit indefinitely** option prevents the spending limit from automatically getting enabled at the start of the next billing period. However, you can turn it back on yourself at any time.
+ - Selecting the **Remove spending limit for the current billing period** option automatically turns the spending limit back on at the start of the next billing period.
1. Click **Select payment method** to choose a payment method for your subscription. This will become the active payment method for your subscription.
1. Click **Finish**.
The spending limit could prevent you from deploying or using certain third-party
This feature is available only when the spending limit has been removed indefinitely for subscription types that include credits over multiple months. You can use this feature to turn on your spending limit automatically at the start of the next billing period.

1. Sign in to the [Azure portal](https://portal.azure.com) as the Account Administrator.
-1. Search for **Cost Management + Billing**.
-
- ![Screenshot that shows search for cost management + billing ](./media/spending-limit/search-bar.png)
-
-1. In the **My subscriptions** list, select your subscription. For example, *Visual Studio Enterprise*.
-
- ![Screenshot that shows my subscriptions grid in overview](./media/spending-limit/cost-management-overview-msdn-x.png)
-
+1. Search for **Cost Management + Billing**.
+ :::image type="content" source="./media/spending-limit/search-bar.png" alt-text="Screenshot that shows search for cost management + billing." lightbox="./media/spending-limit/search-bar.png" :::
+1. In the **My subscriptions** list, select your subscription. For example, *Visual Studio Enterprise*.
+ :::image type="content" source="./media/spending-limit/cost-management-overview-msdn-x.png" alt-text="Screenshot that shows my subscriptions grid in overview." lightbox="./media/spending-limit/cost-management-overview-msdn-x.png" :::
> [!NOTE] > If you don't see some of your Visual Studio subscriptions here, it might be because you changed a subscription directory at some point. For these subscriptions, you need to switch the directory to the original directory (the directory in which you initially signed up). Then, repeat step 2.- 1. In the Subscription overview, click the banner at the top of the page to turn the spending limit back on. ## Custom spending limit
cost-management-billing Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/subscription-disabled.md
Your Azure subscription can get disabled because your credit has expired, you re
## Your credit is expired
-When you sign up for an Azure free account, you get a Free Trial subscription, which provides you USD200 Azure credit in your billing currency for 30 days and 12 months of free services. At the end of 30 days, Azure disables your subscription. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit and free services included with your subscription. To continue using Azure services, you must [upgrade your subscription](upgrade-azure-subscription.md). After you upgrade, your subscription still has access to free services for 12 months. You only get charged for usage beyond the free service quantity limits.
+When you sign up for an Azure free account, you get a Free Trial subscription, which provides you with $200 of Azure credit in your billing currency for 30 days and 12 months of free services. At the end of 30 days, Azure disables your subscription. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit and free services included with your subscription. To continue using Azure services, you must [upgrade your subscription](upgrade-azure-subscription.md). After you upgrade, your subscription still has access to free services for 12 months. You only get charged for usage beyond the free service quantity limits.
## You reached your spending limit
cost-management-billing Upgrade Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/upgrade-azure-subscription.md
You can upgrade your [Azure free account](https://azure.microsoft.com/free/) to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) in the Azure portal.
-If you have an [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) and are eligible for an [Azure free account](https://azure.microsoft.com/free/), you can upgrade to it to a [Azure free account](https://azure.microsoft.com/free/). You'll get USD200 Azure credit in your billing currency and 12 months of free services on upgrade. If you don't qualify for a free account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+If you have an [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) and are eligible for an [Azure free account](https://azure.microsoft.com/free/), you can upgrade it to an [Azure free account](https://azure.microsoft.com/free/). You'll get $200 of Azure credit in your billing currency and 12 months of free services on upgrade. If you don't qualify for a free account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
If you have an [Azure for Students](https://azure.microsoft.com/offers/ms-azr-0170p/) account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
data-factory Continuous Integration Deployment Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
Follow these steps to get started:
"build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index" }, "dependencies":{
- "@microsoft/azure-data-factory-utilities":"^0.1.3"
+ "@microsoft/azure-data-factory-utilities":"^0.1.5"
} } ```
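The `build` script above becomes the command-line entry point for validating and exporting your factory resources. As a hedged sketch of how the package's documented `validate` and `export` commands are typically invoked from a build agent (the repository path, subscription, resource group, factory name, and output folder below are placeholders, not values from this article):

```powershell
# Placeholder paths and IDs; adjust them to your repository layout and factory.
npm install

# Validate all Data Factory resources in the given root folder against the factory.
npm run build validate C:\repo\ADFroot "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<factoryName>"

# Export the validated resources as an ARM template for the release pipeline.
npm run build export C:\repo\ADFroot "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<factoryName>" "ArmTemplate"
```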
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
Previously updated : 11/06/2020 Last updated : 04/29/2021

# Customize the setup for an Azure-SSIS Integration Runtime
You can customize your Azure-SQL Server Integration Services (SSIS) Integration Runtime (IR) in Azure Data Factory (ADF) via custom setups. They allow you to add your own steps during the provisioning or reconfiguration of your Azure-SSIS IR.
-By using custom setups, you can alter the default operating configuration or environment of your Azure-SSIS IR. For example, to start additional Windows services, persist access credentials for file shares, or use strong cryptography/more secure network protocol (TLS 1.2). Or you can install additional components, such as assemblies, drivers, or extensions, on each node of your Azure-SSIS IR. They can be custom-made, Open Source, or 3rd party components. For more information about built-in/preinstalled components, see [Built-in/preinstalled components on Azure-SSIS IR](./built-in-preinstalled-components-ssis-integration-runtime.md).
+By using custom setups, you can alter the default operating configuration or environment of your Azure-SSIS IR. For example, you can start additional Windows services, persist access credentials for file shares, or use only strong cryptography and a more secure network protocol (TLS 1.2). Or you can install additional components, such as assemblies, drivers, or extensions, on each node of your Azure-SSIS IR. They can be custom-made, open-source, or third-party components. For more information about built-in/preinstalled components, see [Built-in/preinstalled components on Azure-SSIS IR](./built-in-preinstalled-components-ssis-integration-runtime.md).
You can do custom setups on your Azure-SSIS IR in either of two ways:
-* **Standard custom setup with a script**: Prepare a script and its associated files, and upload them all together to a blob container in your Azure storage account. You then provide a Shared Access Signature (SAS) Uniform Resource Identifier (URI) for your container when you set up or reconfigure your Azure-SSIS IR. Each node of your Azure-SSIS IR then downloads the script and its associated files from your container and runs your custom setup with elevated permissions. When your custom setup is finished, each node uploads the standard output of execution and other logs to your container.
+* **Standard custom setup with a script**: Prepare a script and its associated files, and upload them all together to a blob container in your Azure Storage account. You then provide a Shared Access Signature (SAS) Uniform Resource Identifier (URI) for your blob container when you set up or reconfigure your Azure-SSIS IR. Each node of your Azure-SSIS IR then downloads the script and its associated files from your blob container and runs your custom setup with elevated permissions. When your custom setup is finished, each node uploads the standard output of execution and other logs to your blob container.
+* **Express custom setup without a script**: Run some common system configurations and Windows commands or install some popular or recommended additional components without using any scripts.

You can install both free (unlicensed) and paid (licensed) components with standard and express custom setups. If you're an independent software vendor (ISV), see [Develop paid or licensed components for Azure-SSIS IR](how-to-develop-azure-ssis-ir-licensed-components.md).
You can install both free (unlicensed) and paid (licensed) components with stand
The following limitations apply only to standard custom setups:

-- If you want to use *gacutil.exe* in your script to install assemblies in the global assembly cache (GAC), you need to provide *gacutil.exe* as part of your custom setup. Or you can use the copy that's provided in the *Sample* folder of our *Public Preview* container, see the **Standard custom setup samples** section below.
+- If you want to use *gacutil.exe* in your script to install assemblies in the global assembly cache (GAC), you need to provide *gacutil.exe* as part of your custom setup. Or you can use the copy that's provided in the *Sample* folder of our Public Preview blob container; see the **Standard custom setup samples** section below.
- If you want to reference a subfolder in your script, *msiexec.exe* doesn't support the `.\` notation to reference the root folder. Use a command such as `msiexec /i "MySubfolder\MyInstallerx64.msi" ...` instead of `msiexec /i ".\MySubfolder\MyInstallerx64.msi" ...`.
To customize your Azure-SSIS IR, you need the following items:
- [Provision your Azure-SSIS IR](./tutorial-deploy-ssis-packages-azure.md)
-- [An Azure storage account](https://azure.microsoft.com/services/storage/). Not required for express custom setups. For standard custom setups, you upload and store your custom setup script and its associated files in a blob container. The custom setup process also uploads its execution logs to the same blob container.
+- [An Azure Storage account](https://azure.microsoft.com/services/storage/). Not required for express custom setups. For standard custom setups, you upload and store your custom setup script and its associated files in a blob container. The custom setup process also uploads its execution logs to the same blob container.
## Instructions
To provision or reconfigure your Azure-SSIS IR with standard custom setups on AD
* You must have a script file named *main.cmd*, which is the entry point of your custom setup.
* To ensure that the script can be silently executed, you should test it on your local machine first.
- * If you want additional logs generated by other tools (for example, *msiexec.exe*) to be uploaded to your container, specify the predefined environment variable, `CUSTOM_SETUP_SCRIPT_LOG_DIR`, as the log folder in your scripts (for example, *msiexec /i xxx.msi /quiet /lv %CUSTOM_SETUP_SCRIPT_LOG_DIR%\install.log*).
+ * If you want additional logs generated by other tools (for example, *msiexec.exe*) to be uploaded to your blob container, specify the predefined environment variable, `CUSTOM_SETUP_SCRIPT_LOG_DIR`, as the log folder in your scripts (for example, *msiexec /i xxx.msi /quiet /lv %CUSTOM_SETUP_SCRIPT_LOG_DIR%\install.log*).
1. Download, install, and open [Azure Storage Explorer](https://storageexplorer.com/).
- a. Under **(Local and Attached)**, right-click **Storage Accounts**, and then select **Connect to Azure storage**.
+ a. Under **Local and Attached**, right-click **Storage Accounts**, and then select **Connect to Azure Storage**.
![Connect to Azure Storage](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image1.png)
- b. Select **Use a storage account name and key**, and then select **Next**.
+ b. Select **Storage account or service**, select **Account name and key**, and then select **Next**.
- ![Use a storage account name and key](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image2.png)
-
- c. Enter your Azure storage account name and key, select **Next**, and then select **Connect**.
+ c. Enter your Azure Storage account name and key, select **Next**, and then select **Connect**.
![Provide storage account name and key](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image3.png)
- d. Under your connected Azure storage account, right-click **Blob Containers**, select **Create Blob Container**, and name the new container.
+ d. Under your connected Azure Storage account, right-click **Blob Containers**, select **Create Blob Container**, and name the new blob container.
![Create a blob container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image4.png)
- e. Select the new container, and upload your custom setup script and its associated files. Make sure that you upload *main.cmd* at the top level of your container, not in any folder. Your container should contain only the necessary custom setup files, so downloading them to your Azure-SSIS IR later won't take a long time. The maximum duration of a custom setup is currently set at 45 minutes before it times out. This includes the time to download all files from your container and install them on the Azure-SSIS IR. If setup requires more time, raise a support ticket.
+ e. Select the new blob container, and upload your custom setup script and its associated files. Make sure that you upload *main.cmd* at the top level of your blob container, not in any folder. Your blob container should contain only the necessary custom setup files, so downloading them to your Azure-SSIS IR later won't take a long time. The maximum duration of a custom setup is currently set at 45 minutes before it times out. This includes the time to download all files from your blob container and install them on the Azure-SSIS IR. If setup requires more time, raise a support ticket.
![Upload files to the blob container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image5.png)
- f. Right-click the container, and then select **Get Shared Access Signature**.
+ f. Right-click the blob container, and then select **Get Shared Access Signature**.
- ![Get the Shared Access Signature for the container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image6.png)
+ ![Get the Shared Access Signature for the blob container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image6.png)
- g. Create the SAS URI for your container with a sufficiently long expiration time and with read/write/list permission. You need the SAS URI to download and run your custom setup script and its associated files. This happens whenever any node of your Azure-SSIS IR is reimaged or restarted. You also need write permission to upload setup execution logs.
+ g. Create the SAS URI for your blob container with a sufficiently long expiration time and with read/write/list permission. You need the SAS URI to download and run your custom setup script and its associated files. This happens whenever any node of your Azure-SSIS IR is reimaged or restarted. You also need write permission to upload setup execution logs. (A scripted alternative to this step is sketched at the end of this section.)
> [!IMPORTANT] > Ensure that the SAS URI doesn't expire and the custom setup resources are always available during the whole lifecycle of your Azure-SSIS IR, from creation to deletion, especially if you regularly stop and start your Azure-SSIS IR during this period.
- ![Generate the Shared Access Signature for the container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image7.png)
+ ![Generate the Shared Access Signature for the blob container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image7.png)
- h. Copy and save the SAS URI of your container.
+ h. Copy and save the SAS URI of your blob container.
![Copy and save the Shared Access Signature](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image8.png)
-1. Select the **Customize your Azure-SSIS Integration Runtime with additional system configurations/component installations** check box on the **Advanced settings** page of **Integration runtime setup** pane. Next, enter the SAS URI of your container in the **Custom setup container SAS URI** text box.
+1. Select the **Customize your Azure-SSIS Integration Runtime with additional system configurations/component installations** check box on the **Advanced settings** page of the **Integration runtime setup** pane. Next, enter the SAS URI of your blob container in the **Custom setup container SAS URI** text box.
![Advanced settings with custom setups](./media/tutorial-create-azure-ssis-runtime-portal/advanced-settings-custom.png)
-After your standard custom setup finishes and your Azure-SSIS IR starts, you can find all custom setup logs in the *main.cmd.log* folder of your container. They include the standard output of *main.cmd* and other execution logs.
+After your standard custom setup finishes and your Azure-SSIS IR starts, you can find all custom setup logs in the *main.cmd.log* folder of your blob container. They include the standard output of *main.cmd* and other execution logs.
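If you prefer to script step g rather than use Azure Storage Explorer, the blob container SAS can also be generated with Azure PowerShell. A minimal sketch, assuming hypothetical resource names; the five-year expiration is only an example of a "sufficiently long" lifetime:

```powershell
# Hypothetical names; replace with your own resource group, storage account, and blob container.
$ctx = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context

# Read/write/list permission and a long expiration, per the guidance above.
$sasUri = New-AzStorageContainerSASToken -Context $ctx -Name "customsetupcontainer" `
    -Permission rwl -ExpiryTime (Get-Date).AddYears(5) -FullUri

$sasUri # Paste this value into the Custom setup container SAS URI text box.
```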
### Express custom setup
To provision or reconfigure your Azure-SSIS IR with custom setups using Azure Po
To view and reuse some samples of standard custom setups, complete the following steps.
-1. Connect to our Public Preview container using Azure Storage Explorer.
+1. Connect to our Public Preview blob container using Azure Storage Explorer.
- a. Under **(Local and Attached)**, right-click **Storage Accounts**, select **Connect to Azure storage**, select **Use a connection string or a shared access signature URI**, and then select **Next**.
+ a. Under **Local and Attached**, right-click **Storage Accounts**, and then select **Connect to Azure Storage**.
- ![Connect to Azure storage with the Shared Access Signature](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image9.png)
+ ![Connect to Azure Storage](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image1.png)
- b. Select **Use a SAS URI** and then, in the **URI** text box, enter the following SAS URI:
+ b. Select **Blob container**, select **Shared access signature URL (SAS)**, and then select **Next**.
- `https://ssisazurefileshare.blob.core.windows.net/publicpreview?sp=rl&st=2020-03-25T04:00:00Z&se=2025-03-25T04:00:00Z&sv=2019-02-02&sr=c&sig=WAD3DATezJjhBCO3ezrQ7TUZ8syEUxZZtGIhhP6Pt4I%3D`
+ c. In the **Blob container SAS URL** text box, enter the SAS URI for our Public Preview blob container below, select **Next**, and then select **Connect**.
- ![Provide the Shared Access Signature for the container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image10.png)
-
- c. Select **Next**, and then select **Connect**.
+ `https://ssisazurefileshare.blob.core.windows.net/publicpreview?sp=rl&st=2020-03-25T04:00:00Z&se=2025-03-25T04:00:00Z&sv=2019-02-02&sr=c&sig=WAD3DATezJjhBCO3ezrQ7TUZ8syEUxZZtGIhhP6Pt4I%3D`
- d. In the left pane, select the connected **publicpreview** container, and then double-click the *CustomSetupScript* folder. In this folder are the following items:
+ d. In the left pane, select the connected **publicpreview** blob container, and then double-click the *CustomSetupScript* folder. In this folder are the following items:
- * A *Sample* folder, which contains a custom setup to install a basic task on each node of your Azure-SSIS IR. The task does nothing but sleep for a few seconds. The folder also contains a *gacutil* folder, whose entire contents (*gacutil.exe*, *gacutil.exe.config*, and *1033\gacutlrc.dll*) can be copied as is to your container.
+ * A *Sample* folder, which contains a custom setup to install a basic task on each node of your Azure-SSIS IR. The task does nothing but sleep for a few seconds. The folder also contains a *gacutil* folder, whose entire content (*gacutil.exe*, *gacutil.exe.config*, and *1033\gacutlrc.dll*) can be copied as is to your blob container.
- * A *UserScenarios* folder, which contains several custom setup samples from real user scenarios. If you want to install multiple samples on your Azure-SSIS IR, you can combine their custom setup script (*main.cmd*) files into a single one and upload it with all of their associated files into your container.
+ * A *UserScenarios* folder, which contains several custom setup samples from real user scenarios. If you want to install multiple samples on your Azure-SSIS IR, you can combine their custom setup script (*main.cmd*) files into a single one and upload it with all of their associated files into your blob container.
- ![Contents of the public preview container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image11.png)
+ ![Contents of the public preview blob container](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image11.png)
e. Double-click the *UserScenarios* folder to find the following items:
To view and reuse some samples of standard custom setups, complete the following
* An *EXCEL* folder, which contains a custom setup script (*main.cmd*) to install some C# assemblies and libraries on each node of your Azure-SSIS IR. You can use them in Script Tasks to dynamically read and write Excel files.
- First, download [*ExcelDataReader.dll*](https://www.nuget.org/packages/ExcelDataReader/) and [*DocumentFormat.OpenXml.dll*](https://www.nuget.org/packages/DocumentFormat.OpenXml/), and then upload them all together with *main.cmd* to your container. Alternatively, if you just want to use the standard Excel connectors (Connection Manager, Source, and Destination), the Access Redistributable that contains them is already preinstalled on your Azure-SSIS IR, so you don't need any custom setup.
+ First, download [*ExcelDataReader.dll*](https://www.nuget.org/packages/ExcelDataReader/) and [*DocumentFormat.OpenXml.dll*](https://www.nuget.org/packages/DocumentFormat.OpenXml/), and then upload them all together with *main.cmd* to your blob container. Alternatively, if you just want to use the standard Excel connectors (Connection Manager, Source, and Destination), the Access Redistributable that contains them is already preinstalled on your Azure-SSIS IR, so you don't need any custom setup.
* A *MYSQL ODBC* folder, which contains a custom setup script (*main.cmd*) to install the MySQL ODBC drivers on each node of your Azure-SSIS IR. This setup lets you use the ODBC connectors (Connection Manager, Source, and Destination) to connect to the MySQL server.
- First, [download the latest 64-bit and 32-bit versions of the MySQL ODBC driver installers](https://dev.mysql.com/downloads/connector/odbc/) (for example, *mysql-connector-odbc-8.0.13-winx64.msi* and *mysql-connector-odbc-8.0.13-win32.msi*), and then upload them all together with *main.cmd* to your container.
+ First, [download the latest 64-bit and 32-bit versions of the MySQL ODBC driver installers](https://dev.mysql.com/downloads/connector/odbc/) (for example, *mysql-connector-odbc-8.0.13-winx64.msi* and *mysql-connector-odbc-8.0.13-win32.msi*), and then upload them all together with *main.cmd* to your blob container.
* An *ORACLE ENTERPRISE* folder, which contains a custom setup script (*main.cmd*) and silent installation config file (*client.rsp*) to install the Oracle connectors and OCI driver on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the Oracle Connection Manager, Source, and Destination to connect to the Oracle server.
- First, download Microsoft Connectors v5.0 for Oracle (*AttunitySSISOraAdaptersSetup.msi* and *AttunitySSISOraAdaptersSetup64.msi*) from [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=55179) and the latest Oracle client (for example, *winx64_12102_client.zip*) from [Oracle](https://www.oracle.com/database/technologies/oracle19c-windows-downloads.html). Next, upload them all together with *main.cmd* and *client.rsp* to your container. If you use TNS to connect to Oracle, you also need to download *tnsnames.ora*, edit it, and upload it to your container. In this way, it can be copied to the Oracle installation folder during setup.
+ First, download Microsoft Connectors v5.0 for Oracle (*AttunitySSISOraAdaptersSetup.msi* and *AttunitySSISOraAdaptersSetup64.msi*) from [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=55179) and the latest Oracle client (for example, *winx64_12102_client.zip*) from [Oracle](https://www.oracle.com/database/technologies/oracle19c-windows-downloads.html). Next, upload them all together with *main.cmd* and *client.rsp* to your blob container. If you use TNS to connect to Oracle, you also need to download *tnsnames.ora*, edit it, and upload it to your blob container. In this way, it can be copied to the Oracle installation folder during setup.
* An *ORACLE STANDARD ADO.NET* folder, which contains a custom setup script (*main.cmd*) to install the Oracle ODP.NET driver on each node of your Azure-SSIS IR. This setup lets you use the ADO.NET Connection Manager, Source, and Destination to connect to the Oracle server.
- First, [download the latest Oracle ODP.NET driver](https://www.oracle.com/technetwork/database/windows/downloads/index-090165.html) (for example, *ODP.NET_Managed_ODAC122cR1.zip*), and then upload it together with *main.cmd* to your container.
+ First, [download the latest Oracle ODP.NET driver](https://www.oracle.com/technetwork/database/windows/downloads/index-090165.html) (for example, *ODP.NET_Managed_ODAC122cR1.zip*), and then upload it together with *main.cmd* to your blob container.
* An *ORACLE STANDARD ODBC* folder, which contains a custom setup script (*main.cmd*) to install the Oracle ODBC driver on each node of your Azure-SSIS IR. The script also configures the Data Source Name (DSN). This setup lets you use the ODBC Connection Manager, Source, and Destination or Power Query Connection Manager and Source with the ODBC data source type to connect to the Oracle server.
- First, download the latest Oracle Instant Client (Basic Package or Basic Lite Package) and ODBC Package, and then upload them all together with *main.cmd* to your container:
+ First, download the latest Oracle Instant Client (Basic Package or Basic Lite Package) and ODBC Package, and then upload them all together with *main.cmd* to your blob container:
   * [Download 64-bit packages](https://www.oracle.com/technetwork/topics/winx64soft-089540.html) (Basic Package: *instantclient-basic-windows.x64-18.3.0.0.0dbru.zip*; Basic Lite Package: *instantclient-basiclite-windows.x64-18.3.0.0.0dbru.zip*; ODBC Package: *instantclient-odbc-windows.x64-18.3.0.0.0dbru.zip*)
   * [Download 32-bit packages](https://www.oracle.com/technetwork/topics/winsoft-085727.html) (Basic Package: *instantclient-basic-nt-18.3.0.0.0dbru.zip*; Basic Lite Package: *instantclient-basiclite-nt-18.3.0.0.0dbru.zip*; ODBC Package: *instantclient-odbc-nt-18.3.0.0.0dbru.zip*)

 * An *ORACLE STANDARD OLEDB* folder, which contains a custom setup script (*main.cmd*) to install the Oracle OLEDB driver on each node of your Azure-SSIS IR. This setup lets you use the OLEDB Connection Manager, Source, and Destination to connect to the Oracle server.
- First, [download the latest Oracle OLEDB driver](https://www.oracle.com/partners/campaign/index-090165.html) (for example, *ODAC122010Xcopy_x64.zip*), and then upload it together with *main.cmd* to your container.
+ First, [download the latest Oracle OLEDB driver](https://www.oracle.com/partners/campaign/index-090165.html) (for example, *ODAC122010Xcopy_x64.zip*), and then upload it together with *main.cmd* to your blob container.
* A *POSTGRESQL ODBC* folder, which contains a custom setup script (*main.cmd*) to install the PostgreSQL ODBC drivers on each node of your Azure-SSIS IR. This setup lets you use the ODBC Connection Manager, Source, and Destination to connect to the PostgreSQL server.
- First, [download the latest 64-bit and 32-bit versions of PostgreSQL ODBC driver installers](https://www.postgresql.org/ftp/odbc/versions/msi/) (for example, *psqlodbc_x64.msi* and *psqlodbc_x86.msi*), and then upload them all together with *main.cmd* to your container.
+ First, [download the latest 64-bit and 32-bit versions of PostgreSQL ODBC driver installers](https://www.postgresql.org/ftp/odbc/versions/msi/) (for example, *psqlodbc_x64.msi* and *psqlodbc_x86.msi*), and then upload them all together with *main.cmd* to your blob container.
* A *SAP BW* folder, which contains a custom setup script (*main.cmd*) to install the SAP .NET connector assembly (*librfc32.dll*) on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the SAP BW Connection Manager, Source, and Destination to connect to the SAP BW server.
- First, upload the 64-bit or the 32-bit version of *librfc32.dll* from the SAP installation folder together with *main.cmd* to your container. The script then copies the SAP assembly to the *%windir%\SysWow64* or *%windir%\System32* folder during setup.
+ First, upload the 64-bit or the 32-bit version of *librfc32.dll* from the SAP installation folder together with *main.cmd* to your blob container. The script then copies the SAP assembly to the *%windir%\SysWow64* or *%windir%\System32* folder during setup.
* A *STORAGE* folder, which contains a custom setup script (*main.cmd*) to install Azure PowerShell on each node of your Azure-SSIS IR. This setup lets you deploy and run SSIS packages that run [Azure PowerShell cmdlets/scripts to manage your Azure Storage](../storage/blobs/storage-quickstart-blobs-powershell.md).
- Copy *main.cmd*, a sample *AzurePowerShell.msi* (or use the latest version), and *storage.ps1* to your container. Use *PowerShell.dtsx* as a template for your packages. The package template combines an [Azure Blob Download Task](/sql/integration-services/control-flow/azure-blob-download-task), which downloads a modifiable PowerShell script (*storage.ps1*), and an [Execute Process Task](https://blogs.msdn.microsoft.com/ssis/2017/01/26/run-powershell-scripts-in-ssis/), which executes the script on each node.
+ Copy *main.cmd*, a sample *AzurePowerShell.msi* (or use the latest version), and *storage.ps1* to your blob container. Use *PowerShell.dtsx* as a template for your packages. The package template combines an [Azure Blob Download Task](/sql/integration-services/control-flow/azure-blob-download-task), which downloads a modifiable PowerShell script (*storage.ps1*), and an [Execute Process Task](https://blogs.msdn.microsoft.com/ssis/2017/01/26/run-powershell-scripts-in-ssis/), which executes the script on each node.
* A *TERADATA* folder, which contains a custom setup script (*main.cmd*), its associated file (*install.cmd*), and installer packages (*.msi*). These files install the Teradata connectors, the Teradata Parallel Transporter (TPT) API, and the ODBC driver on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the Teradata Connection Manager, Source, and Destination to connect to the Teradata server.
- First, [download the Teradata Tools and Utilities 15.x zip file](http://partnerintelligence.teradata.com) (for example, *TeradataToolsAndUtilitiesBase__windows_indep.15.10.22.00.zip*), and then upload it together with the previously mentioned *.cmd* and *.msi* files to your container.
+ First, [download the Teradata Tools and Utilities 15.x zip file](http://partnerintelligence.teradata.com) (for example, *TeradataToolsAndUtilitiesBase__windows_indep.15.10.22.00.zip*), and then upload it together with the previously mentioned *.cmd* and *.msi* files to your blob container.
- * A *TLS 1.2* folder, which contains a custom setup script (*main.cmd*) to use strong cryptography and more secure network protocol (TLS 1.2) on each node of your Azure-SSIS IR. The script also disables older SSL/TLS versions.
+ * A *TLS 1.2* folder, which contains a custom setup script (*main.cmd*) to use only strong cryptography/more secure network protocol (TLS 1.2) on each node of your Azure-SSIS IR. The script also disables older SSL/TLS versions (SSL 3.0, TLS 1.0, TLS 1.1) at the same time.
* A *ZULU OPENJDK* folder, which contains a custom setup script (*main.cmd*) and PowerShell file (*install_openjdk.ps1*) to install the Zulu OpenJDK on each node of your Azure-SSIS IR. This setup lets you use Azure Data Lake Store and Flexible File connectors to process ORC and Parquet files. For more information, see [Azure Feature Pack for Integration Services](/sql/integration-services/azure-feature-pack-for-integration-services-ssis#dependency-on-java).
- First, [download the latest Zulu OpenJDK](https://www.azul.com/downloads/zulu/zulu-windows/) (for example, *zulu8.33.0.1-jdk8.0.192-win_x64.zip*), and then upload it together with *main.cmd* and *install_openjdk.ps1* to your container.
+ First, [download the latest Zulu OpenJDK](https://www.azul.com/downloads/zulu/zulu-windows/) (for example, *zulu8.33.0.1-jdk8.0.192-win_x64.zip*), and then upload it together with *main.cmd* and *install_openjdk.ps1* to your blob container.
![Folders in the user scenarios folder](media/how-to-configure-azure-ssis-ir-custom-setup/custom-setup-image12.png)
- f. To reuse these standard custom setup samples, copy the content of selected folder to your container.
+ f. To reuse these standard custom setup samples, copy the content of the selected folder to your blob container.
-1. When you provision or reconfigure your Azure-SSIS IR on ADF UI, select the **Customize your Azure-SSIS Integration Runtime with additional system configurations/component installations** check box on the **Advanced settings** page of **Integration runtime setup** pane. Next, enter the SAS URI of your container in the **Custom setup container SAS URI** text box.
+1. When you provision or reconfigure your Azure-SSIS IR in the ADF UI, select the **Customize your Azure-SSIS Integration Runtime with additional system configurations/component installations** check box on the **Advanced settings** page of the **Integration runtime setup** pane. Next, enter the SAS URI of your blob container in the **Custom setup container SAS URI** text box.
-1. When you provision or reconfigure your Azure-SSIS IR using Azure PowerShell, stop it if it's already started/running, run the `Set-AzDataFactoryV2IntegrationRuntime` cmdlet with the SAS URI of your container as the value for `SetupScriptContainerSasUri` parameter, and then start your Azure-SSIS IR.
+1. When you provision or reconfigure your Azure-SSIS IR using Azure PowerShell, stop it if it's already started/running, run the `Set-AzDataFactoryV2IntegrationRuntime` cmdlet with the SAS URI of your blob container as the value for the `SetupScriptContainerSasUri` parameter, and then start your Azure-SSIS IR (see the sketch after this list).
-1. After your standard custom setup finishes and your Azure-SSIS IR starts, you can find all custom setup logs in the *main.cmd.log* folder of your container. They include the standard output of *main.cmd* and other execution logs.
+1. After your standard custom setup finishes and your Azure-SSIS IR starts, you can find all custom setup logs in the *main.cmd.log* folder of your blob container. They include the standard output of *main.cmd* and other execution logs.
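To make the Azure PowerShell step above concrete, here's a minimal, hedged sketch of reconfiguring the IR with the blob container SAS URI. The resource group, factory, and IR names are hypothetical, and `$sasUri` holds the SAS URI you saved earlier:

```powershell
# Hypothetical names; replace with your own.
$rg  = "myResourceGroup"
$adf = "myDataFactory"
$ir  = "mySsisIr"

# The IR must be stopped before its custom setup can be changed.
Stop-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $rg -DataFactoryName $adf -Name $ir -Force

Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $rg -DataFactoryName $adf -Name $ir `
    -SetupScriptContainerSasUri $sasUri

# Starting the IR runs main.cmd on every node as part of provisioning.
Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $rg -DataFactoryName $adf -Name $ir
```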
## Next steps
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console
description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors.
Previously updated : 4/6/2021 Last updated : 04/29/2021
Activation and setup of the on-premises management console ensures that:
## Sign in for the first time
-To sign in to the management console:
+**To sign in to the management console:**
1. Navigate to the IP address you received for the on-premises management console during the system installation.
If you forgot your password, select the **Recover Password** option, and see [P
After you sign in for the first time, you will need to activate the on-premises management console by getting and uploading an activation file.
-To activate the on-premises management console:
+**To activate the on-premises management console:**
1. Sign in to the on-premises management console.
To activate the on-premises management console:
After initial activation, the number of monitored devices can exceed the number of committed devices defined during onboarding. This occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning will appear on the management console. If this happens, upload a new activation file.
+### Activate an expired license (versions under 10.0)
+
+If you use a version prior to 10.0, your license may expire, and the following alert will be displayed.
++
+**To activate your license:**
+
+1. Open a case with [support](https://ms.portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
+
+1. Supply support with your Activation ID number.
+
+1. Support will supply you with new license information in the form of a string of letters.
+
+1. Read the terms and conditions, and select the checkbox to accept them.
+
+1. Paste the string into the space provided.
+
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Paste the string into the provided field.":::
+
+1. Select **Activate**.
+
## Set up a certificate

After you install the management console, a local self-signed certificate is generated. This certificate is used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
After you install the management console, a local self-signed certificate is gen
Two levels of security are available:

- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.
- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.* This option is enabled by default after installation.

The console supports the following types of certificates:
The console supports the following types of certificates:
> [!IMPORTANT] > We recommend that you don't use a self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
-To upload a certificate:
+**To upload a certificate:**
1. When you're prompted after sign-in, define a certificate name.
To upload a certificate:
You may need to refresh your screen after you upload the CA-signed certificate.
-To disable validation between the management console and connected sensors:
+**To disable validation between the management console and connected sensors:**
1. Select **Next**.
For information about uploading a new certificate, supported certificate files,
## Connect sensors to the on-premises management console
-You must ensure that sensors send information to the on-premises management console, and that the on-premises management console can perform backups, manage alerts, and carry out other activity on the sensors. To do that, use the following procedures to verify that you make an initial connection between sensors and the on-premises management console.
+Ensure that sensors send information to the on-premises management console, and that the on-premises management console can perform backups, manage alerts, and carry out other activity on the sensors. To do that, use the following procedures to verify that you make an initial connection between sensors and the on-premises management console.
Two options are available for connecting Azure Defender for IoT sensors to the on-premises management console:
After connecting, you must set up a site with these sensors.
### Connect sensors to the on-premises management console from the sensor console
-You can connect sensors to the on-premises management console from the sensor console:
+**To connect sensors to the on-premises management console from the sensor console:**
1. On the on-premises management console, select **System Settings**.
Enable a secured tunneling connection between organizational sensors and the on-
Tunneling allows you to connect, through the on-premises management console's IP address and a single port (that is, 9000), to any sensor.
-To set up tunneling at the on-premises management console:
+**To set up tunneling at the on-premises management console:**
- Sign in to the on-premises management console and run the following commands:
To set up tunneling at the on-premises management console:
   service apache2 reload
   ```
-To set up tunneling on the sensor:
+**To set up tunneling on the sensor:**
1. Open TCP port 9000 on the sensor (network.properties) manually. If the port is not open, the sensor will reject the connection from the on-premises management console.
Access groups enable better control over where users manage and analyze devices
You can define a business unit, and a region for each site in your organization. You can then add zones, which are logical entities that exist in your network.
-You should assign at least one sensor per zone. The five-level model provides the flexibility and granularity required to deliver the protection system that reflects the structure of your organization.
+Assign at least one sensor per zone. The five-level model provides the flexibility and granularity required to deliver the protection system that reflects the structure of your organization.
:::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/diagram-of-sensor-showing-relationships.png" alt-text="Diagram showing sensors and regional relationship.":::
Using the Enterprise View, you can edit your sites directly. When you select a s
:::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/console-map-with-data-overlay-v2.png" alt-text="Screenshot of an on-premises management console map with Berlin data overlay.":::
-To set up a site:
+**To set up a site:**
1. Add new business units to reflect your organization's logical structure.
To set up a site:
1. Enter the new business unit name and select **ADD**.
-1. Add a new regions to reflect your organization's regions.
+1. Add new regions to reflect your organization's regions.
1. From the Enterprise View, select **All Regions** > **Manage Regions**.
To set up a site:
If you no longer need a site, you can delete it from your on-premises management console.
-To delete a site:
+**To delete a site:**
1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name, and then select **Delete Site**. The confirmation box appears, verifying that you want to delete the site.
The following table describes the parameters in the **Site Management** window.
| :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/number-of-alerts-icon.png" border="false"::: | Indicates the number of alerts sent by sensors that are assigned to the zone. | | :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false"::: | Unassigns sensors from zones. |
-To add a zone to a site:
+**To add a zone to a site:**
1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name, and then select **Add Zone**. The **Create New Zone** dialog box appears.
To add a zone to a site:
1. Select **SAVE**. The new zone appears in the **Site Management** window under the site that this zone belongs to.
-To edit a zone:
+**To edit a zone:**
1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the zone name, and then select **Edit Zone**. The **Edit Zone** dialog box appears.
To edit a zone:
1. Edit the zone parameters and select **SAVE**.
-To delete a zone:
+**To delete a zone:**
1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the zone name, and then select **Delete Zone**.

1. In the confirmation box, select **YES**.
-To filter according to the connectivity status:
+**To filter according to the connectivity status:**
- From the upper-left corner, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/down-pointing-icon.png" border="false"::: next to **Connectivity**, and then select one of the following options:
To filter according to the connectivity status:
- **Disconnected**: Presents only disconnected sensors.
-To filter according to the upgrade status:
+**To filter according to the upgrade status:**
- From the upper-left corner, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/down-pointing-icon.png" border="false"::: next to **Upgrade Status** and select one of the following options:
To filter according to the upgrade status:
For each zone, you need to assign sensors that perform local traffic analysis and alerting. You can assign only the sensors that are connected to the on-premises management console.
-To assign a sensor:
+**To assign a sensor:**
1. Select **Site Management**. The unassigned sensors appear in the upper-left corner of the dialog box.
To assign a sensor:
1. Select **ASSIGN**.
-To unassign and delete a sensor:
+**To unassign and delete a sensor:**
1. Disconnect the sensor from the on-premises management console. See [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for details.
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
Title: Activate and set up your sensor
description: This article describes how to sign in and activate a sensor console.
Previously updated : 1/12/2021 Last updated : 04/29/2021
The console supports the following certificate types:
### Sign in and activate the sensor
-To sign in and activate:
+**To sign in and activate:**
1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens.
For information about uploading a new certificate, supported certificate paramet
#### Update sensor network configuration before activation
-The sensor network configuration parameters were defined during the software installation or when you purchased a preconfigured sensor. The following parameters were defined:
+The sensor network configuration parameters were defined during the software installation, or when you purchased a preconfigured sensor. The following parameters were defined:
- IP address - DNS
The sensor network configuration parameters were defined during the software ins
You might want to update this information before activating the sensor. For example, you might need to change the preconfigured parameters defined by Arrow. You can also define proxy settings before activating your sensor.
-To update sensor network configuration parameters:
+**To update sensor network configuration parameters:**
1. Select the **Sensor Network Configuration** link from the **Activation** dialog box.
To update sensor network configuration parameters:
2. The parameters defined during installation are displayed. The option to define the proxy is also available. Update any settings as required and select **Save**.
+### Activate an expired license (versions under 10.0)
+
+If you use a version prior to 10.0, your license may expire, and the following alert will be displayed.
++
+**To activate your license:**
+
+1. Open a case with [support](https://ms.portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
+
+1. Supply support with your Activation ID number.
+
+1. Support will supply you with new license information in the form of a string of letters.
+
+1. Read the terms and conditions, and select the checkbox to accept them.
+
+1. Paste the string into the space provided.
+
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Paste the string into the provided field.":::
+
+1. Select **Activate**.
+
### Subsequent sign-ins

After first-time activation, the Azure Defender for IoT sensor console opens after sign-in without requiring an activation file. You need only your sign-in credentials.
After adjusting the system settings, you can let the Azure Defender for IoT sens
The learning mode should run for about 2 to 6 weeks, depending on your network size and complexity. After you disable learning mode, any activity that differs from your baseline activity will trigger an alert.
-To disable learning mode:
+**To disable learning mode:**
- Select **System Settings** and turn off the **Learning** option.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
To install the software:
:::image type="content" source="media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png" alt-text="Screenshot that shows the management console's sign-in screen.":::
+## Legacy devices
+
+This section describes devices that are no longer available for purchase, but are still supported by Azure Defender for IoT.
+
+### Nuvo 5006LP installation
+
+This section provides the Nuvo 5006LP installation procedure. Before installing the software on the Nuvo 5006LP appliance, you need to adjust the appliance's BIOS configuration.
+
+#### Nuvo 5006LP front panel
++
+1. Power button and power indicator
+1. DVI video connectors
+1. HDMI video connectors
+1. VGA video connectors
+1. Remote on/off control and status LED output
+1. Reset button
+1. Management network adapter
+1. Ports to receive mirrored data
+
+#### Nuvo 5006LP back panel
++
+1. SIM card slot
+1. Microphone and speakers
+1. COM ports
+1. USB connectors
+1. DC power port (DC IN)
+
+#### Configure the Nuvo 5006LP BIOS
+
+The following procedure describes how to configure the Nuvo 5006LP BIOS. Make sure the operating system was previously installed on the appliance.
+
+To configure the BIOS:
+
+1. Power on the appliance.
+
+1. Press **F2** to enter the BIOS configuration.
+
+1. Navigate to **Power** and change **Power On after Power Failure** to **S0-Power On**.
+
+ :::image type="content" source="media/tutorial-install-components/nuvo-power-on.png" alt-text="Change you Nuvo 5006 to power on after a power failure..":::
+
+1. Navigate to **Boot** and ensure that **PXE Boot to LAN** is set to **Disabled**.
+
+1. Press **F10** to save, and then select **Exit**.
+
+#### Software installation (Nuvo 5006LP)
+
+The installation process takes approximately 20 minutes. After installation, the system is restarted several times.
+
+1. Connect the external CD or disk on key with the ISO image.
+
+1. Boot the appliance.
+
+1. Select **English**.
+
+1. Select **XSENSE-RELEASE-<version> Office...**.
+
+ :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select the version of the sensor to install.":::
+
+1. Define the appliance architecture and network properties:
+
+ :::image type="content" source="media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Define the Nuvo's architecture and network properties.":::
+
+ | Parameter | Configuration |
+ | -| - |
+ | **Hardware profile** | Select **office**. |
+ | **Management interface** | **eth0** |
+ | **Management network IP address** | **IP address provided by the customer** |
+ | **Management subnet mask** | **IP address provided by the customer** |
+ | **DNS** | **IP address provided by the customer** |
+ | **Default gateway IP address** | **0.0.0.0** |
+ | **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
+ | **Bridge interface** | - |
+
+1. Accept the settings and continue by entering `Y`.
+
+After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.
+
+### Fitlet2 mini sensor installation
+
+This section provides the Fitlet2 installation procedure. Before installing the software on the Fitlet2 appliance, you need to adjust the appliance's BIOS configuration.
+
+#### Fitlet2 front panel
++
+#### Fitlet2 back panel
++
+#### Configure the Fitlet2 BIOS
+
+1. Power on the appliance.
+
+1. Navigate to **Main** > **OS Selection**.
+
+1. Press **+/-** to select **Linux**.
+
+ :::image type="content" source="media/tutorial-install-components/fitlet-linux.png" alt-text="Set the OS to Linux on your Fitlet2.":::
+
+1. Verify that the system date and time are updated with the installation date and time.
+
+1. Navigate to **Advanced**, and select **ACPI Settings**.
+
+1. Select **Enable Hibernation**, and press **+/-** to select **Disabled**.
+
+ :::image type="content" source="media/tutorial-install-components/disable-hibernation.png" alt-text="Diable the hibernation mode on your Fitlet2.":::
+
+1. Press **Esc**.
+
+1. Navigate to **Advanced** > **TPM Configuration**.
+
+1. Select **fTPM**, and press **+/-** to select **Disabled**.
+
+1. Press **Esc**.
+
+1. Navigate to **CPU Configuration** > **VT-d**.
+
+1. Press **+/-** to select **Enabled**.
+
+1. Navigate to **CSM Configuration** > **CSM Support**.
+
+1. Press **+/-** to select **Enabled**.
+1. Navigate to **Advanced** > **Boot option filter [Legacy only]**, and change the setting in the following fields to **Legacy**:
+ - Network
+ - Storage
+ - Video
+ - Other PCI
+
+ :::image type="content" source="media/tutorial-install-components/legacy-only.png" alt-text="Set all fields to Legacy.":::
+
+1. Press **Esc**.
+
+1. Navigate to **Security** > **Secure Boot Customization**.
+
+1. Press **+/-** to select **Disabled**.
+
+1. Press **Esc**.
+
+1. Navigate to **Boot** > **Boot mode select**, and select **Legacy**.
+
+1. Select **Boot Option #1 – [USB CD/DVD]**.
+
+1. Select **Save & Exit**.
+
+#### Software installation (Fitlet2)
+
+The installation process takes approximately 20 minutes. After installation, the system is restarted several times.
+
+1. Connect the external CD or disk on key with the ISO image.
+
+1. Boot the appliance.
+
+1. Select **English**.
+
+1. Select **XSENSE-RELEASE-<version> Office...**.
+
+ :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select the version of the sensor to install.":::
+
+ > [!Note]
+ > Do not select Ruggedized.
+
+1. Define the appliance architecture and network properties:
+
+ :::image type="content" source="media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Define the Nuvo's architecture and network properties.":::
+
+ | Parameter | Configuration |
+ | -| - |
+ | **Hardware profile** | Select **office**. |
+ | **Management interface** | **em1** |
+ | **Management network IP address** | **IP address provided by the customer** |
+ | **Management subnet mask** | **IP address provided by the customer** |
+ | **DNS** | **IP address provided by the customer** |
+ | **Default gateway IP address** | **0.0.0.0** |
+ | **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
+ | **Bridge interface** | - |
+
+1. Accept the settings and continue by entering `Y`.
+
+After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.
+
## Post-installation validation

To validate the installation of a physical appliance, you need to perform many tests. The same validation process applies to all the appliance types.
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
The following procedure describes how to update a standalone sensor by using the
5. In the sensor console's sidebar, select **System Settings**.
-6. On the **Version Upgrade** pane, select **Upgrade**.
+6. On the **Version Update** pane, select **Update**.
- :::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the upgrade pane.":::
+ :::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the update pane.":::
7. Select the file that you downloaded from the Defender for IoT **Updates** page.
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-from-the-on-premises-management-console.md
Title: Manage sensors from the on-premises management console description: Learn how to manage sensors from the management console, including updating sensor versions, pushing system settings to sensors, and enabling and disabling engines on sensors. Previously updated : 12/07/2020 Last updated : 04/22/2021
To apply system settings:
1. On the console's left pane, select **System Settings**.
-2. On the **Configure Sensors** pane, select one of the options.
+1. On the **Configure Sensors** pane, select one of the options.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensor-system-setting-options.png" alt-text="The system setting options for a sensor."::: The following example describes how to define mail server parameters for your enterprise sensors.
-3. Select **Mail Server**.
+1. Select **Mail Server**.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/edit-system-settings-screen.png" alt-text="Select your mail server from the System Settings screen.":::
-4. Select a sensor on the left.
+1. Select a sensor on the left.
-5. Set the mail server parameters and select **Duplicate**. Each item in the sensor tree appears with a check box next to it.
+1. Set the mail server parameters and select **Duplicate**. Each item in the sensor tree appears with a check box next to it.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/check-off-each-sensor.png" alt-text="Ensure the check boxes are selected for your sensors.":::
-6. In the sensor tree, select the items to which you want to apply the configuration.
+1. In the sensor tree, select the items to which you want to apply the configuration.
-7. Select **Save**.
+1. Select **Save**.
## Update versions
To update several sensors:
1. Go to the [Azure portal](https://portal.azure.com/).
-2. Go to Azure Defender for IoT.
+1. Navigate to Azure Defender for IoT.
-3. Go to the **Updates** page.
+1. Go to the **Updates** page.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/update-screen.png" alt-text="Screenshot of the Updates dashboard view.":::
-4. Select **Download** from the **Sensors** section and save the file.
+1. Select **Download** from the **Sensors** section and save the file.
-5. Sign in to the management console and select **System Settings**.
+1. Sign in to the management console and select **System Settings**.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/admin-system-settings.png" alt-text="Screenshot of the Administration menu to select System Settings.":::
-6. Mark the sensors that you want to update in the **Sensor Engine Configuration** section, and then select **Automatic Updates**.
+1. Select the sensors to update in the **Sensor Engine Configuration** section, and then select **Automatic Updates**.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensors-select.png" alt-text="Two sensors showing learning mode and automatic updates.":::
-7. Select **Save Changes**.
+1. Select **Save Changes**.
-8. On the **Sensors Version upgrade** pane, select :::image type="icon" source="media/how-to-manage-sensors-from-the-on-premises-management-console/plus-icon.png" border="false":::.
+1. On the sensor, select **System Settings**, and then select **Update**.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/display-files.png" alt-text="Sensor version upgrade screen to display files.":::
+ :::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the update pane.":::
9. An **Upload File** dialog box opens. Upload the file that you downloaded from the **Updates** page.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/upload-file.png" alt-text="Select the Browse button to upload your file.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/upload-file.png" alt-text="Select the Browse button to upload your file.":::
-10. During the update process, the update status of each sensor appears in the **Site Management** window.
+You can monitor the update status of each sensor in the **Site Management** window.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/progress.png" alt-text="Observe the progress of your update.":::
+
+### Update sensors from the on-premises management console
+
+You can view the update status of your sensors from the management console. If the update failed, you can retry the update from the on-premises management console (versions 2.3.5 and later).
+
+To update the sensor from the on-premises management console:
+
+1. Sign in to the on-premises management console, and navigate to the **Site Management** page.
+
+1. Locate any sensors that show **Failed** in the **Update Progress** column, and select the download button.
+
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/download-update-button.png" alt-text="Select the download icon to try to download and install the update for your sensor.":::
+
+You can monitor the update status of each sensor in the **Site Management** window.
++
+If you are unable to update the sensor, contact customer support for assistance.
## Update threat intelligence packages
To update the threat intelligence data:
1. Go to the Defender for IoT **Updates** page.
-2. Download and save the file.
+1. Download and save the file.
-3. Sign in to the management console.
+1. Sign in to the management console.
-4. On the side menu, select **System Settings**.
+1. On the side menu, select **System Settings**.
-5. Select the sensors that should receive the update in the **Sensor Engine Configuration** section.
+1. Select the sensors that should receive the update in the **Sensor Engine Configuration** section.
-6. In the **Select Threat Intelligence Data** section, select the plus sign (**+**).
+1. In the **Select Threat Intelligence Data** section, select the plus sign (**+**).
-7. Upload the package that you downloaded from the Defender for IoT **Updates** page.
+1. Upload the package that you downloaded from the Defender for IoT **Updates** page.
## Understand sensor disconnection events
To enable or disable engines for connected sensors:
1. In the console's left pane, select **System Settings**.
-2. In the **Sensor Engine Configuration** section, select **Enable** or **Disable** for the engines.
+1. In the **Sensor Engine Configuration** section, select **Enable** or **Disable** for the engines.
-3. Select **SAVE CHANGES**.
+1. Select **SAVE CHANGES**.
A red exclamation mark appears if there's a mismatch of enabled engines on one of your enterprise sensors. The engine might have been disabled directly from the sensor.
To back up sensors:
1. Select **Schedule Sensor Backup** from the **System Settings** window. Sensors that your on-premises management console manages appear in the **Sensor Backup Schedule** dialog box.
-2. Enable the **Collect Backups** toggle.
+1. Enable the **Collect Backups** toggle.
-3. Select a calendar interval, date, and time zone. The time format is based on a 24-hour clock. For example, enter 6:00 PM as **18:00**.
+1. Select a calendar interval, date, and time zone. The time format is based on a 24-hour clock. For example, enter 6:00 PM as **18:00**.
-4. In the **Backup Storage Allocation** field, enter the storage that you want to allocate for your backups. You're notified if you exceed the maximum space.
+1. In the **Backup Storage Allocation** field, enter the storage that you want to allocate for your backups. You're notified if you exceed the maximum space.
-5. In the **Retain Last** field, indicate the number of backups per sensor you want to retain. When the limit is exceeded, the oldest backup is deleted.
+1. In the **Retain Last** field, indicate the number of backups per sensor you want to retain. When the limit is exceeded, the oldest backup is deleted.
-6. Choose a backup location:
+1. Choose a backup location:
- To back up to the on-premises management console, disable the **Custom Path** toggle. The default location is `/var/cyberx/sensor-backups`. - To back up to an external server, enable the **Custom Path** toggle and enter a location. The following characters are supported: `/`, `a-z`, `A-Z`, `0-9`, and `_`.
-7. Select **Save**.
+1. Select **Save**.
To back up immediately:
To set up an SMB server so you can save a sensor backup to an external drive:
1. Create a shared folder in the external SMB server.
-2. Get the folder path, username, and password required to access the SMB server.
+1. Get the folder path, username, and password required to access the SMB server.
-3. In Defender for IoT, make a directory for the backups:
+1. In Defender for IoT, make a directory for the backups:
`sudo mkdir /<backup_folder_name_on_server>` `sudo chmod 777 /<backup_folder_name_on_server>/`
-4. Edit fstab:ΓÇ»
+1. Edit fstab:
`sudo nano /etc/fstab` Add the following line: `//<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0`
-5. Edit or create credentials to share. These are the credentials for the SMB server:
+1. Edit or create credentials to share. These are the credentials for the SMB server:
`sudo nano /etc/samba/user`
-6. Add:ΓÇ»
+1. Add:
`username=<user name>` `password=<password>`
-7. Mount the directory:
+1. Mount the directory:
`sudo mount -a`
-8. Configure a backup directory to the shared folder on the Defender for IoT sensor:ΓÇ»
+1. Configure a backup directory to the shared folder on the Defender for IoT sensor:
`sudo nano /var/cyberx/properties/backup.properties`
-9. Set `Backup.shared_location` to `<backup_folder_name_on_cyberx_server>`.
+1. Set `Backup.shared_location` to `<backup_folder_name_on_cyberx_server>`.
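
Taken together, the resulting configuration might look like the following sketch. All values here (the server IP, share and folder names, and credentials) are hypothetical placeholders, not values from your environment:

```cmd/sh
# /etc/fstab entry (hypothetical values) that mounts the SMB share on the sensor:
//192.168.10.5/sensor-backups /backups cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0

# /etc/samba/user (the SMB server credentials referenced above):
username=backupadmin
password=<password>

# After 'sudo mount -a', confirm that the share is mounted:
mount | grep cifs
```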
## See also
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-your-network.md
An overview of the industrial network diagram will allow you to define the prope
<Add your network diagram with marked serial connection>
-7. For QoS, the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
+7. For Quality of Service (QoS), the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
Business unit (BU): ________________
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Title: Troubleshoot the sensor and on-premises management console description: Troubleshoot your sensor and on-premises management console to eliminate any problems you might be having. Previously updated : 03/14/2021 Last updated : 04/22/2021 # Troubleshoot the sensor and on-premises management console
To fix the configuration:
1. In the data-mining report, select :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/administrator-mode.png" border="false"::: to enter the administrator mode and delete the IP addresses of your ICS devices.
-### Tweak the sensor's quality of service
+### Tweak the sensor's Quality of Service (QoS)
To save your network resources, you can limit the interface bandwidth that the sensor uses for day-to-day procedures.
If an expected alert is not shown in the **Alerts** window, verify the following
- Verify that you did not exclude this alert by using the **Alert Exclusion** rules in the on-premises management console.
-### Tweak the quality of service
+### Tweak the Quality of Service (QoS)
To save your network resources, you can limit the number of alerts sent to external systems (such as emails or SIEM) in one sync operation between an appliance and the on-premises management console.
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
The extending interface cannot change any of the definitions of the parent inter
## Model code
-Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension *.json*. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
+Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension .json. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
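
For reference, here is a minimal sketch of what a DTDL model saved as a .json file can look like; the model ID and property below are hypothetical examples, not part of any published ontology:

```cmd/sh
# Write a minimal DTDL interface to Thermostat.json (hypothetical model ID and property):
cat > Thermostat.json <<'EOF'
{
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": "Property",
      "name": "Temperature",
      "schema": "double"
    }
  ]
}
EOF
```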
### Possible schemas
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-convert.md
The sample is a .NET Core command-line application called **RdfToDtdlConverter**
You can get the sample here: [RdfToDtdlConverter](/samples/azure-samples/rdftodtdlconverter/digital-twins-model-conversion-samples/).
-To download the code to your machine, select the **Browse code** button underneath the title on the sample page, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a *.ZIP* file called *RdfToDtdlConverter-main.zip*. You can then unzip the file and explore the code.
+To download the code to your machine, select the **Browse code** button underneath the title on the sample page, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a .zip file called *RdfToDtdlConverter-main.zip*. You can then unzip the file and explore the code.
:::image type="content" source="media/concepts-ontologies-convert/download-repo-zip.png" alt-text="Screenshot of the RdfToDtdlConverter repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/concepts-ontologies-convert/download-repo-zip.png":::
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-extend.md
In this section, you'll see two examples:
Both examples can be implemented with new properties: a `drawingId` property that associates the 3D drawing with the digital twin and an "online" property that indicates whether the conference room is online or not.
-Typically, you don't want to modify the industry ontology directly because you'd like to be able to incorporate updates to it in your solution in the future (which would overwrite your additions). Instead, these kinds of additions can be made in your own interface hierarchy that extends from the DTDL-based RealEstateCore ontology. Each interface you create uses multiple interface inheritance to extend its parent RealEstateCore interface and its parent interface from your extended interface hierarchy. This approach enables you to make use of the industry ontology and your additions together.
+Typically, you don't want to modify the industry ontology directly because you want to be able to incorporate updates to it in your solution in the future (which would overwrite your additions). Instead, these kinds of additions can be made in your own interface hierarchy that extends from the DTDL-based RealEstateCore ontology. Each interface you create uses multiple interface inheritance to extend its parent RealEstateCore interface and its parent interface from your extended interface hierarchy. This approach enables you to make use of the industry ontology and your additions together.
To extend the industry ontology, you create your own interfaces that extend from the interfaces in the industry ontology and add the new capabilities to your extended interfaces. For each interface that you want to extend, you create a new interface. The extended interfaces are written in DTDL (see the DTDL for Extended Interfaces section later in this document).
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration.md
In the *Register an application* page that follows, fill in the requested values
* **Supported account types**: Select *Accounts in this organizational directory only (Default Directory only - Single tenant)* * **Redirect URI**: An *Azure AD application reply URL* for the Azure AD application. Add a *Public client/native (mobile & desktop)* URI for `http://localhost`.
-When you are finished, hit the *Register* button.
+When you are finished, select the *Register* button.
:::image type="content" source="media/how-to-create-app-registration/register-an-application.png" alt-text="View of the 'Register an application' page with the described values filled in":::
Take note of the _**Application (client) ID**_ and _**Directory (tenant) ID**_ s
Next, configure the app registration you've created with baseline permissions to the Azure Digital Twins APIs.
-From the portal page for your app registration, select *API permissions* from the menu. On the following permissions page, hit the *+ Add a permission* button.
+From the portal page for your app registration, select *API permissions* from the menu. On the following permissions page, select the *+ Add a permission* button.
:::image type="content" source="media/how-to-create-app-registration/add-permission.png" alt-text="View of the app registration in the Azure portal, highlighting the 'API permissions' menu option and '+ Add a permission' button":::
-In the *Request API permissions* page that follows, switch to the *APIs my organization uses* tab and search for *azure digital twins*. Select _**Azure Digital Twins**_ from the search results to proceed with assigning permissions for the Azure Digital Twins APIs.
+In the *Request API permissions* page that follows, switch to the *APIs my organization uses* tab and search for *Azure digital twins*. Select _**Azure Digital Twins**_ from the search results to proceed with assigning permissions for the Azure Digital Twins APIs.
:::image type="content" source="media/how-to-create-app-registration/request-api-permissions-1.png" alt-text="View of the 'Request API Permissions' page search result showing Azure Digital Twins, with an Application (client) ID of 0b07f429-9f4b-4714-9392-cc5e8e80c8b0.":::
Next, you'll select which permissions to grant for these APIs. Expand the **Read
:::image type="content" source="media/how-to-create-app-registration/request-api-permissions-2.png" alt-text="View of the 'Request API Permissions' page selecting 'Read.Write' permissions for the Azure Digital Twins APIs":::
-Hit *Add permissions* when finished.
+Select *Add permissions* when finished.
### Verify success
digital-twins How To Enable Managed Identities Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-managed-identities-portal.md
In this section, you'll add a system-managed identity to an Azure Digital Twins
1. On this page, select the **On** option to turn on this feature.
-1. Hit the **Save** button, and **Yes** to confirm.
+1. Select the **Save** button, and **Yes** to confirm.
:::image type="content" source="media/how-to-enable-managed-identities/identity-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Identity (preview) page for an Azure Digital Twins instance. There's a highlight around the page name in the Azure Digital Twins instance menu, the On option for Status, the Save button, and the Yes confirmation button.":::
digital-twins How To Enable Private Link Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-private-link-portal.md
This will open a page to enter the details of a new private endpoint.
1. Fill in selections for your **Subscription** and **Resource group**. Set the **Location** to the same location as the VNet you'll be using. Choose a **Name** for the endpoint, and for **Target sub-resources** select *API*.
-1. Next, select the **Virtual network** and **Subnet** you'd like to use to deploy the endpoint.
+1. Next, select the **Virtual network** and **Subnet** you want to use to deploy the endpoint.
1. Lastly, select whether to **Integrate with private DNS zone**. You can use the default of **Yes** or, for help with this option, you can follow the link in the portal to [learn more about private DNS integration](../private-link/private-endpoint-overview.md#dns-configuration).
-After filling out the configuration options, Hit **OK** to finish.
+After filling out the configuration options, select **OK** to finish.
This will return you to the **Networking** tab of the Azure Digital Twins instance setup, where your new endpoint should be visible under **Private endpoint connections**.
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
You'll be attaching Azure SignalR Service to Azure Digital Twins through the pat
First, download the required sample apps. You will need both of the following: * [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* that holds two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [Tutorial: Connect an end-to-end solution](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
- - If you haven't already downloaded the sample as part of the tutorial in [Prerequisites](#prerequisites), [navigate to the sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) and select the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a *.ZIP* by selecting the *Code* button and *Download ZIP*.
+ - If you haven't already downloaded the sample as part of the tutorial in [Prerequisites](#prerequisites), [navigate to the sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) and select the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the *Code* button and *Download ZIP*.
:::image type="content" source="media/includes/download-repo-zip.png" alt-text="View of the digital-twins-samples repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/includes/download-repo-zip.png":::
In the [Azure portal](https://portal.azure.com/), navigate to your event grid to
On the *Create Event Subscription* page, fill in the fields as follows (fields filled by default are not mentioned): * *EVENT SUBSCRIPTION DETAILS* > **Name**: Give a name to your event subscription. * *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options.
-* *ENDPOINT DETAILS* > **Endpoint**: Hit the *Select an endpoint* link. This will open a *Select Azure Function* window:
+* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link. This will open a *Select Azure Function* window:
- Fill in your **Subscription**, **Resource group**, **Function app** and **Function** (*broadcast*). Some of these may auto-populate after selecting the subscription.
- - Hit **Confirm Selection**.
+ - Select **Confirm Selection**.
:::image type="content" source="media/how-to-integrate-azure-signalr/create-event-subscription.png" alt-text="Azure portal view of creating an event subscription. The fields above are filled in, and the 'Confirm Selection' and 'Create' buttons are highlighted.":::
-Back on the *Create Event Subscription* page, hit **Create**.
+Back on the *Create Event Subscription* page, select **Create**.
At this point, you should see two event subscriptions in the *Event Grid Topic* page.
Next, you'll configure the sample client web app. Start by gathering the **HTTP
:::image type="content" source="media/how-to-integrate-azure-signalr/functions-negotiate.png" alt-text="Azure portal view of the function app, with 'Functions' highlighted in the menu. The list of functions is shown on the page, and the 'negotiate' function is also highlighted.":::
-1. Hit *Get function URL* and copy the value **up through _/api_ (don't include the last _/negotiate?_)**. You'll use this in the next step.
+1. Select *Get function URL* and copy the value **up through _/api_ (don't include the last _/negotiate?_)**. You'll use this in the next step.
:::image type="content" source="media/how-to-integrate-azure-signalr/get-function-url.png" alt-text="Azure portal view of the 'negotiate' function. The 'Get function URL' button is highlighted, and the portion of the URL from the beginning through '/api'":::
Next, you'll configure the sample client web app. Start by gathering the **HTTP
Next, set permissions in your function app in the Azure portal: 1. In the Azure portal's [Function apps](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp) page, select your function app instance.
-1. Scroll down in the instance menu and select *CORS*. On the CORS page, add `http://localhost:3000` as an allowed origin by entering it into the empty box. Check the box for *Enable Access-Control-Allow-Credentials* and hit *Save*.
+1. Scroll down in the instance menu and select *CORS*. On the CORS page, add `http://localhost:3000` as an allowed origin by entering it into the empty box. Check the box for *Enable Access-Control-Allow-Credentials* and select *Save*.
:::image type="content" source="media/how-to-integrate-azure-signalr/cors-setting-azure-function.png" alt-text="CORS Setting in Azure Function":::
digital-twins How To Integrate Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-logic-apps.md
You will need the **_Twin ID_** of a twin in your instance that you've created.
You will also need to create a **_Client secret_** for your Azure AD app registration. To do this, navigate to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) page in the Azure portal (you can use this link or look for it in the portal search bar). Select your registration that you created in the previous section from the list, in order to open its details.
-Hit *Certificates and secrets* from the registration's menu, and select *+ New client secret*.
+Select *Certificates and secrets* from the registration's menu, and then select *+ New client secret*.
:::image type="content" source="media/how-to-integrate-logic-apps/client-secret.png" alt-text="Portal view of an Azure AD app registration. There's a highlight around 'Certificates and secrets' in the resource menu, and a highlight on the page around 'New client secret'":::
-Enter whatever values you would like for *Description* and *Expires*, and hit *Add*.
+Enter whatever values you want for *Description* and *Expires*, and select *Add*.
:::image type="content" source="media/how-to-integrate-logic-apps/add-client-secret.png" alt-text="Add client secret":::
Now, verify that the client secret is visible on the _Certificates & secrets_ pa
Now, you're ready to create a [custom Logic Apps connector](../logic-apps/custom-connector-overview.md) for the Azure Digital Twins APIs. After doing this, you'll be able to hook up Azure Digital Twins when creating a logic app in the next section.
-Navigate to the [Logic Apps Custom Connector](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Web%2FcustomApis) page in the Azure portal (you can use this link or search for it in the portal search bar). Hit *+ Add*.
+Navigate to the [Logic Apps Custom Connector](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Web%2FcustomApis) page in the Azure portal (you can use this link or search for it in the portal search bar). Select *+ Add*.
:::image type="content" source="media/how-to-integrate-logic-apps/logic-apps-custom-connector.png" alt-text="The 'Logic Apps Custom Connector' page in the Azure portal. Highlight around the 'Add' button":::
-In the *Create Logic Apps Custom Connector* page that follows, select your subscription and resource group, and a name and deployment location for your new connector. Hit *Review + create*.
+In the *Create Logic Apps Custom Connector* page that follows, select your subscription and resource group, and a name and deployment location for your new connector. Select *Review + create*.
:::image type="content" source="media/how-to-integrate-logic-apps/create-logic-apps-custom-connector.png" alt-text="The 'Create Logic Apps Custom Connector' page in the Azure portal.":::
-This will take you to the *Review + create* tab, where you can hit *Create* at the bottom to create your resource.
+This will take you to the *Review + create* tab, where you can select *Create* at the bottom to create your resource.
:::image type="content" source="media/how-to-integrate-logic-apps/review-logic-apps-custom-connector.png" alt-text="The 'Review + create' tab of the 'Review Logic Apps Custom Connector' page in the Azure portal. Highlight around the 'Create' button":::
-You'll be taken to the deployment page for the connector. When it is finished deploying, hit the *Go to resource* button to view the connector's details in the portal.
+You'll be taken to the deployment page for the connector. When it is finished deploying, select the *Go to resource* button to view the connector's details in the portal.
### Configure connector for Azure Digital Twins Next, you'll configure the connector you've created to reach Azure Digital Twins.
-First, download a custom Azure Digital Twins Swagger that has been modified to work with Logic Apps. Download the [Azure Digital Twins custom Swaggers (Logic Apps connector) sample](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) by hitting the *Download ZIP* button. Navigate to the downloaded *Azure_Digital_Twins_custom_Swaggers__Logic_Apps_connector_.zip* folder and unzip it.
+First, download a custom Azure Digital Twins Swagger that has been modified to work with Logic Apps. Download the [Azure Digital Twins custom Swaggers (Logic Apps connector) sample](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) by selecting the *Download ZIP* button. Navigate to the downloaded *Azure_Digital_Twins_custom_Swaggers__Logic_Apps_connector_.zip* folder and unzip it.
The custom Swagger for this tutorial is located in the ***Azure_Digital_Twins_custom_Swaggers__Logic_Apps_connector_\LogicApps*** folder. This folder contains subfolders called *stable* and *preview*, both of which hold different versions of the Swagger organized by date. The folder with the most recent date will contain the latest copy of the Swagger. Whichever version you select, the Swagger file is named _**digitaltwins.json**_. > [!NOTE] > Unless you're working with a preview feature, it's generally recommended to use the most recent *stable* version of the Swagger. However, earlier versions and preview versions of the Swagger are also still supported.
-Next, go to your connector's Overview page in the [Azure portal](https://portal.azure.com) and hit *Edit*.
+Next, go to your connector's Overview page in the [Azure portal](https://portal.azure.com) and select *Edit*.
:::image type="content" source="media/how-to-integrate-logic-apps/edit-connector.png" alt-text="The 'Overview 'page for the connector created in the previous step. Highlight around the 'Edit' button":::
In the *Edit Logic Apps Custom Connector* page that follows, configure this info
* **Custom connectors** - API Endpoint: REST (leave default) - Import mode: OpenAPI file (leave default)
- - File: This will be the custom Swagger file you downloaded earlier. Hit *Import*, locate the file on your machine (*Azure_Digital_Twins_custom_Swaggers__Logic_Apps_connector_\LogicApps\...\digitaltwins.json*), and hit *Open*.
+ - File: This will be the custom Swagger file you downloaded earlier. Select *Import*, locate the file on your machine (*Azure_Digital_Twins_custom_Swaggers__Logic_Apps_connector_\LogicApps\...\digitaltwins.json*), and select *Open*.
* **General information** - Icon: Upload an icon that you like - Icon background color: Enter hexadecimal code in the format '#xxxxxx' for your color.
- - Description: Fill whatever values you would like.
+ - Description: Fill in whatever values you want.
- Scheme: HTTPS (leave default) - Host: The *host name* of your Azure Digital Twins instance. - Base URL: / (leave default)
-Then, hit the *Security* button at the bottom of the window to continue to the next configuration step.
+Then, select the *Security* button at the bottom of the window to continue to the next configuration step.
:::image type="content" source="media/how-to-integrate-logic-apps/configure-next.png" alt-text="Screenshot of the bottom of the 'Edit Logic Apps Custom Connector' page. Highlight around button to continue to Security":::
-In the Security step, hit *Edit* and configure this information:
+In the Security step, select *Edit* and configure this information:
* **Authentication type**: OAuth 2.0 * **OAuth 2.0**: - Identity provider: Azure Active Directory
In the Security step, hit *Edit* and configure this information:
- Scope: Directory.AccessAsUser.All - Redirect URL: (leave default for now)
-Note that the Redirect URL field says *Save the custom connector to generate the redirect URL*. Do this now by hitting *Update connector* across the top of the pane to confirm your connector settings.
+Note that the Redirect URL field says *Save the custom connector to generate the redirect URL*. Do this now by selecting *Update connector* across the top of the pane to confirm your connector settings.
:::image type="content" source="media/how-to-integrate-logic-apps/update-connector.png" alt-text="Screenshot of the top of the 'Edit Logic Apps Custom Connector' page. Highlight around 'Update connector' button":::
Return to the Redirect URL field and copy the value that has been generated. You
This is all the information that is required to create your connector (no need to continue past Security to the Definition step). You can close the *Edit Logic Apps Custom Connector* pane. >[!NOTE]
->Back on your connector's Overview page where you originally hit *Edit*, note that hitting *Edit* again will restart the entire process of entering your configuration choices. It will not populate your values from the last time you went through it, so if you want to save an updated configuration with any changed values, you must re-enter all the other values as well to avoid their being overwritten by the defaults.
+>Back on your connector's Overview page where you originally selected *Edit*, note that selecting *Edit* again will restart the entire process of entering your configuration choices. It will not populate your values from the last time you went through it, so if you want to save an updated configuration with any changed values, you must re-enter all the other values as well to avoid their being overwritten by the defaults.
### Grant connector permissions in the Azure AD app
Under *Authentication* from the registration's menu, add a URI.
:::image type="content" source="media/how-to-integrate-logic-apps/add-uri.png" alt-text="The Authentication page for the app registration in the Azure portal. 'Authentication' in the menu is highlighted, and on the page, the 'Add a URI' button is highlighted.":::
-Enter the custom connector's *Redirect URL* into the new field, and hit the *Save* icon.
+Enter the custom connector's *Redirect URL* into the new field, and select the *Save* icon.
:::image type="content" source="media/how-to-integrate-logic-apps/save-uri.png" alt-text="The Authentication page for the app registration in the Azure portal. The new redirect URL is highlighted, and the 'Save' button for the page.":::
You are now done setting up a custom connector that can access the Azure Digital
Next, you'll create a logic app that will use your new connector to automate Azure Digital Twins updates.
-In the [Azure portal](https://portal.azure.com), search for *Logic apps* in the portal search bar. Selecting it should take you to the *Logic apps* page. Hit the *Create logic app* button to create a new logic app.
+In the [Azure portal](https://portal.azure.com), search for *Logic apps* in the portal search bar. Selecting it should take you to the *Logic apps* page. Select the *Create logic app* button to create a new logic app.
In the *Logic App* page that follows, enter your subscription and resource group. Also, choose a name for your logic app and select the deployment location.
-Hit the _Review + create_ button.
+Select the _Review + create_ button.
-This will take you to the *Review + create* tab, where you can review your details and hit *Create* at the bottom to create your resource.
+This will take you to the *Review + create* tab, where you can review your details and select *Create* at the bottom to create your resource.
-You'll be taken to the deployment page for the logic app. When it is finished deploying, hit the *Go to resource* button to continue to the *Logic Apps Designer*, where you will fill in the logic of the workflow.
+You'll be taken to the deployment page for the logic app. When it is finished deploying, select the *Go to resource* button to continue to the *Logic Apps Designer*, where you will fill in the logic of the workflow.
### Design workflow
In the *Logic Apps Designer*, under *Start with a common trigger*, select _**Rec
In the *Logic Apps Designer* page that follows, change the **Recurrence** Frequency to *Second*, so that the event is triggered every 3 seconds. This will make it easy to see the results later without having to wait very long.
-Hit *+ New step*.
+Select *+ New step*.
This will open a *Choose an action* box. Switch to the *Custom* tab. You should see your custom connector from earlier in the top box.
Select it to display the list of APIs contained in that connector. Use the searc
You may be asked to sign in with your Azure credentials to connect to the connector. If you get a *Permissions requested* dialogue, follow the prompts to grant consent for your app and accept. In the new *DigitalTwinsAdd* box, fill the fields as follows:
-* _id_: Fill the *Twin ID* of the digital twin in your instance that you'd like the Logic App to update.
+* _id_: Fill in the *Twin ID* of the digital twin in your instance that you want the Logic App to update.
* _twin_: This field is where you'll enter the body that the chosen API request requires. For *DigitalTwinsUpdate*, this body is in the form of JSON Patch code. For more about structuring a JSON Patch to update your twin, see the [Update a digital twin](how-to-manage-twin.md#update-a-digital-twin) section of *How-to: Manage digital twins*. * _api-version_: The latest API version. Currently, this value is *2020-10-31*.
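
For illustration, the JSON Patch body that goes in the *twin* field has the shape shown below. The twin ID and property name are hypothetical, and the CLI command is only a convenient way to try the same patch outside the Logic App:

```cmd/sh
# A sketch of the same JSON Patch shape, tried from the Azure CLI
# (hypothetical twin ID and property name):
az dt twin update -n <your-instance-name> --twin-id <your-twin-id> --json-patch '[
  {"op": "replace", "path": "/Temperature", "value": 25.0}
]'
```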
-Hit *Save* in the Logic Apps Designer.
+Select *Save* in the Logic Apps Designer.
You can choose other operations by selecting _+ New step_ on the same window.
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
For instructions, see [How-to: Set up an Azure Digital Twins instance and authen
You'll need to update twin's information a few times to see that data tracked in Time Series Insights. For instructions, see the [Add a model and twin](how-to-ingest-iot-hub-data.md#add-a-model-and-twin) section of the *How to: Ingest IoT hub* article. > [!TIP]
-> In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you'd like to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [How to: Ingest IoT Hub data](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works.
+> In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you want to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [How to: Ingest IoT Hub data](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works.
> > Later, look for another TIP to show you where to start running the device simulator and have your Azure functions update the twins automatically, instead of sending manual digital twin update commands.
az dt twin update -n <your-azure-digital-twins-instance-name> --twin-id thermost
**Repeat the command at least 4 more times with different temperature values**, to create several data points that can be observed later in the Time Series Insights environment. > [!TIP]
-> If you'd like to complete this article with live simulated data instead of manually updating the digital twin values, first make sure you've completed the TIP from the [Prerequisites](#prerequisites) section to set up an Azure function that updates twins from a simulated device.
+> If you want to complete this article with live simulated data instead of manually updating the digital twin values, first make sure you've completed the TIP from the [Prerequisites](#prerequisites) section to set up an Azure function that updates twins from a simulated device.
After that, you can run the device now to start sending simulated data and updating your digital twin through that data flow. ## Visualize your data in Time Series Insights
Now, data should be flowing into your Time Series Insights instance, ready to be
:::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Screenshot of the Azure portal to select the Time Series Insights explorer URL in the overview tab of your Time Series Insights environment." lightbox="media/how-to-integrate-time-series-insights/view-environment.png":::
-2. In the explorer, you will see the twins in the Azure Digital Twins instance shown on the left. Select the *thermostat67* twin, choose the property *Temperature*, and hit **Add**.
+2. In the explorer, you will see the twins in the Azure Digital Twins instance shown on the left. Select the *thermostat67* twin, choose the property *Temperature*, and select **Add**.
- :::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Screenshot of the Time Series Insights explorer to select thermostat67, select the property temperature, and hit add." lightbox="media/how-to-integrate-time-series-insights/add-data.png":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Screenshot of the Time Series Insights explorer to select thermostat67, select the property temperature, and select add." lightbox="media/how-to-integrate-time-series-insights/add-data.png":::
3. You should now see the initial temperature readings from your thermostat, as shown below.
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
Relationships are updated using the `UpdateRelationship` method.
>[!NOTE] >This method is for updating the **properties** of a relationship. If you need to change the source twin or target twin of the relationship, you'll need to [delete the relationship](#delete-relationships) and [re-create one](#create-relationships) using the new twins.
-The required parameters for the client call are the ID of the source twin (the twin where the relationship originates), the ID of the relationship to update, and a [JSON Patch](http://jsonpatch.com/) document containing the properties and new values you'd like to update.
+The required parameters for the client call are the ID of the source twin (the twin where the relationship originates), the ID of the relationship to update, and a [JSON Patch](http://jsonpatch.com/) document containing the properties and new values you want to update.
Here is sample code showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
The snippet uses the [Room.json](https://github.com/Azure-Samples/digital-twins-
Before you run the sample, do the following: 1. Download the model files, place them in your project, and replace the `<path-to>` placeholders in the code below to tell your program where to find them.
-2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's hostname.
+2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's host name.
3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client), the second provides tools to help with authentication against Azure. ```cmd/sh
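# Assuming the standard NuGet package names for the Azure Digital Twins SDK
# and the Azure identity library (a sketch, not an excerpt from this article):
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
```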
Consider the following data table, describing a set of digital twins
One way to get this data into Azure Digital Twins is to convert the table to a CSV file and write code to interpret the file into commands to create twins and relationships. The following code sample illustrates reading the data from the CSV file and creating a twin graph in Azure Digital Twins.
-In the code below, the CSV file is called *data.csv*, and there is a placeholder representing the **hostname** of your Azure Digital Twins instance. The sample also makes use of several packages that you can add to your project to help with this process.
+In the code below, the CSV file is called *data.csv*, and there is a placeholder representing the **host name** of your Azure Digital Twins instance. The sample also makes use of several packages that you can add to your project to help with this process.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graphFromCSV.cs":::
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
Management operations include upload, validation, retrieval, and deletion of mod
## Create models
-Models for Azure Digital Twins are written in DTDL, and saved as *.json* files. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/), which provides syntax validation and other features to facilitate writing DTDL documents.
+Models for Azure Digital Twins are written in DTDL, and saved as .json files. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/), which provides syntax validation and other features to facilitate writing DTDL documents.
Consider an example in which a hospital wants to digitally represent their rooms. Each room contains a smart soap dispenser for monitoring hand-washing, and sensors to monitor traffic through the room.
Instead, if you want to make changes to a model, such as updating `displayName`
### Model versioning
-To create a new version of an existing model, start with the DTDL of the original model. Update, add, or remove the fields you would like to change.
+To create a new version of an existing model, start with the DTDL of the original model. Update, add, or remove the fields you want to change.
+Next, mark this as a newer version of the model by updating the `id` field of the model. The last section of the model ID, after the `;`, represents the model number. To indicate that this is a newer version of the model, increment the number at the end of the `id` value to any number greater than the current version number. For example, a hypothetical model ID `dtmi:com:contoso:Room;1` would become `dtmi:com:contoso:Room;2`.
digital-twins How To Manage Routes Apis Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
Follow the steps below to set up these storage resources in your Azure account,
:::image type="content" source="./media/how-to-manage-routes-apis-cli/generate-sas-token-1.png" alt-text="Storage account page in the Azure portal" lightbox="./media/how-to-manage-routes-apis-cli/generate-sas-token-1.png":::
-1. On the *Shared access signature page*, under *Allowed services* and *Allowed resource types*, select whatever settings you'd like. You'll need to select at least one box in each category. Under *Allowed permissions*, choose **Write** (you can also select other permissions if you want).
+1. On the *Shared access signature page*, under *Allowed services* and *Allowed resource types*, select whatever settings you want. You'll need to select at least one box in each category. Under *Allowed permissions*, choose **Write** (you can also select other permissions if you want).
1. Set whatever values you want for the remaining settings. 1. When you're finished, select the _Generate SAS and connection string_ button to generate the SAS token.
You can restrict the events being sent by adding a **filter** for an endpoint to
> > For telemetry filters, this means that the casing needs to match the casing in the telemetry sent by the device, not necessarily the casing defined in the twin's model.
-To add a filter, you can use a PUT request to *https://{Your-azure-digital-twins-hostname}/eventRoutes/{event-route-name}?api-version=2020-10-31* with the following body:
+To add a filter, you can use a PUT request to *https://{Your-azure-digital-twins-host-name}/eventRoutes/{event-route-name}?api-version=2020-10-31* with the following body:
:::code language="json" source="~/digital-twins-docs-samples/api-requests/filter.json":::
digital-twins How To Manage Routes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-portal.md
From the instance menu, select _Event routes_. Then from the *Event routes* page
On the *Create an event route* page that opens up, choose at minimum: * A name for your route in the _Name_ field
-* The _Endpoint_ you would like to use to create the route
+* The _Endpoint_ you want to use to create the route
For the route to be enabled, you must also **Add an event route filter** of at least `true`. (Leaving the default value of `false` will create the route, but no events will be sent to it.) To do this, toggle the switch for the _Advanced editor_ to enable it, and write `true` in the *Filter* box. :::image type="content" source="media/how-to-manage-routes-portal/create-event-route-no-filter.png" alt-text="Screenshot of creating event route for your instance." lightbox="media/how-to-manage-routes-portal/create-event-route-no-filter.png":::
-When finished, hit the _Save_ button to create your event route.
+When finished, select the _Save_ button to create your event route.
## Filter events
You can either select from some basic common filter options, or use the advanced
#### Use the basic filters
-To use the basic filters, expand the _Event types_ option and select the checkboxes corresponding to the events you'd like to send to your endpoint.
+To use the basic filters, expand the _Event types_ option and select the checkboxes corresponding to the events you want to send to your endpoint.
:::row::: :::column:::
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
To update properties of a digital twin, you write the information you want to re
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="UpdateTwinCall":::
-A patch call can update as many properties on a single twin as you'd like (even all of them). If you need to update properties across multiple twins, you'll need a separate update call for each twin.
+A patch call can update as many properties on a single twin as you want (even all of them). If you need to update properties across multiple twins, you'll need a separate update call for each twin.
> [!TIP] > After creating or updating a twin, there may be a latency of up to 10 seconds before the changes will be reflected in [queries](how-to-query-graph.md). The `GetDigitalTwin` API (described [earlier in this article](#get-data-for-a-digital-twin)) does not experience this delay, so use the API call instead of querying to see your newly-updated twins if you need an instant response.
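
As a sketch of what such a patch looks like, a single JSON Patch document with several operations (hypothetical twin ID and property names below) updates multiple properties of one twin in one call; shown here via the equivalent CLI command:

```cmd/sh
# One patch call updating two properties on a single twin
# (hypothetical twin ID and property names):
az dt twin update -n <your-instance-name> --twin-id <your-twin-id> --json-patch '[
  {"op": "replace", "path": "/Temperature", "value": 25.0},
  {"op": "add", "path": "/Humidity", "value": 40}
]'
```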
The snippet uses the [Room.json](https://github.com/Azure-Samples/digital-twins-
Before you run the sample, do the following: 1. Download the model file, place it in your project, and replace the `<path-to>` placeholder in the code below to tell your program where to find it.
-2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's hostname.
+2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's host name.
3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client), the second provides tools to help with authentication against Azure. ```cmd/sh
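# As in the earlier example, assuming the standard NuGet package names:
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
```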
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-move-regions.md
If the sample isn't able to handle the size of your graph, you can export and im
To proceed with Azure Digital Twins Explorer, first download the sample application code and set it up to run on your machine.
-To get the sample, go to [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Browse code** button underneath the title, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a *.ZIP* file to your machine.
+To get the sample, go to [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Browse code** button underneath the title, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a .zip file to your machine.
:::image type="content" source="media/how-to-move-regions/download-repo-zip.png" alt-text="Screenshot of the digital-twins-explorer repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/how-to-move-regions/download-repo-zip.png":::
digital-twins How To Parse Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-parse-models.md
After you have built a self-contained package and added the executable to your p
DTDLValidator ```
-With the default options, the sample will search for `*.json` files in the current directory and all subdirectories. You can also add the following option to have the sample search in the indicated directory and all subdirectories for files with the extension *.dtdl*:
+With the default options, the sample will search for .json files in the current directory and all subdirectories. You can also add the following option to have the sample search in the indicated directory and all subdirectories for files with the extension .dtdl:
```cmd/sh DTDLValidator -d C:\Work\DTDL -e dtdl
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
Before you can set up the provisioning, you'll need to set up the following:
* an **IoT hub**. For instructions, see the *Create an IoT Hub* section of this [IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md). * an [Azure function](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [How to: Ingest IoT hub data](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
-This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the sample, which you can download as a *.ZIP* file by selecting the **Code** button and **Download ZIP**.
+This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the sample, which you can download as a .zip file by selecting the **Code** button and **Download ZIP**.
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/download-repo-zip.png" alt-text="Screenshot of the digital-twins-iothub-integration repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/how-to-provision-using-device-provisioning-service/download-repo-zip.png":::
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
This version of this article goes through these steps manually, one by one, usin
* To run through an automated setup using a deployment script sample, see the scripted version of this article: [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md). [!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)] [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
You now have an Azure Digital Twins instance ready to go. Next, you'll give the
[!INCLUDE [digital-twins-setup-role-assignment.md](../../includes/digital-twins-setup-role-assignment.md)]
+### Prerequisites: Permission requirements
++
+### Assign the role
+
+To give a user permissions to manage an Azure Digital Twins instance, you must assign them the **Azure Digital Twins Data Owner** role within the instance.
+ Use the following command to assign the role (it must be run by a user with [sufficient permissions](#prerequisites-permission-requirements) in the Azure subscription). The command requires you to pass in the *user principal name* of the Azure AD account for the user who should be assigned the role. In most cases, this will match the user's email on the Azure AD account. ```azurecli-interactive
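# A sketch of the assignment, assuming the "az dt" command group from the Azure IoT CLI extension; fill in the placeholders:
az dt role-assignment create --dt-name <instance-name> --assignee "<user-email>" --role "Azure Digital Twins Data Owner"
```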
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
This version of this article goes through these steps manually, one by one, usin
* To run through an automated setup using a deployment script sample, see the scripted version of this article: [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md). [!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)] ## Create the Azure Digital Twins instance
On the following **Create Resource** page, fill in the values given below:
* **Location**: An Azure Digital Twins-enabled region for the deployment. For more details on regional support, visit [Azure products available by region (Azure Digital Twins)](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins). * **Resource name**: A name for your Azure Digital Twins instance. If your subscription has another Azure Digital Twins instance in the region that's already using the specified name, you'll be asked to pick a different name.
+* **Grant access to resource**: Checking the box in this section gives your Azure account permission to access and manage data in the instance. If you're the one who will be managing the instance, check this box now. If it's grayed out because you don't have permission in the subscription, you can continue creating the resource and have someone with the required permissions grant you the role later. For more information about this role and assigning roles to your instance, see the next section, [Set up user access permissions](#set-up-user-access-permissions).
When you're finished, you can select **Review + create** if you don't want to configure any more settings for your instance. This will take you to a summary page, where you can review the instance details you've entered and finish with **Create**.
You now have an Azure Digital Twins instance ready to go. Next, you'll give the
[!INCLUDE [digital-twins-setup-role-assignment.md](../../includes/digital-twins-setup-role-assignment.md)]
+There are two ways to create a role assignment for a user in Azure Digital Twins:
+* [During Azure Digital Twins instance creation](#assign-the-role-during-instance-creation)
+* [Using Azure Identity Management (IAM)](#assign-the-role-using-azure-identity-management-iam)
+
+They both require the same permissions.
+
+### Prerequisites: Permission requirements
++
+### Assign the role during instance creation
+
+While creating your Azure Digital Twins resource through the process described [earlier in this article](#create-the-azure-digital-twins-instance), select **Assign Azure Digital Twins Data Owner Role** under **Grant access to resource**. This will grant you full access to the data plane APIs.
++
+If you don't have permission to assign a role to an identity, the box will appear grayed out.
++
+In that case, you can still continue to successfully create the Azure Digital Twins resource, but someone with the appropriate permissions will need to assign this role to you or the person who will be managing the instance's data.
+
+### Assign the role using Azure Identity Management (IAM)
+
+You can also assign the **Azure Digital Twins Data Owner** role using the access control options in Azure Identity Management (IAM).
+ First, open the page for your Azure Digital Twins instance in the Azure portal. From the instance's menu, select *Access control (IAM)*. Select the **+ Add** button to add a new role assignment. :::image type="content" source="media/how-to-set-up-instance/portal/add-role-assignment-1.png" alt-text="Selecting to add a role assignment from the 'Access control (IAM)' page"::: On the following *Add role assignment* page, fill in the values (must be completed by a user with [sufficient permissions](#prerequisites-permission-requirements) in the Azure subscription):
-* **Role**: Select *Azure Digital Twins Data Owner* from the dropdown menu
+* **Role**: Select **Azure Digital Twins Data Owner** from the dropdown menu
* **Assign access to**: Use *User, group or service principal* * **Select**: Search for the name or email address of the user to assign. When you select the result, the user will show up in a *Selected members* section.
On the following *Add role assignment* page, fill in the values (must be complet
:::column-end::: :::row-end:::
-When you're finished entering the details, hit the *Save* button.
+When you're finished entering the details, select the *Save* button.
### Verify success
digital-twins How To Set Up Instance Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-powershell.md
This version of this article goes through these steps manually, one by one, usin
* To run through an automated setup using a deployment script sample, see the scripted version of this article: [How-to: Set up an instance and authentication (scripted)](how-to-set-up-instance-scripted.md). [!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)] ## Prepare your environment
user permissions to manage it.
[!INCLUDE [digital-twins-setup-role-assignment.md](../../includes/digital-twins-setup-role-assignment.md)]
+### Prerequisites: Permission requirements
+
+### Assign the role
+
+To give a user permissions to manage an Azure Digital Twins instance, you must assign them the **Azure Digital Twins Data Owner** role within the instance.
+ First, determine the **ObjectId** for the Azure AD account of the user that should be assigned the role. You can find this value using the [Get-AzAdUser](/powershell/module/az.resources/get-azaduser) cmdlet, by passing in the user principal name on the Azure AD account to retrieve their ObjectId (and other user information). In most cases, the user principal name will match the user's email on the Azure AD account. ```azurepowershell-interactive
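# Look up the user's ObjectId from their user principal name (the UPN below is a placeholder):
Get-AzADUser -UserPrincipalName "<user@contoso.com>"
```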
digital-twins How To Set Up Instance Scripted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-scripted.md
This version of this article completes these steps by running an [automated depl
* To view the manual steps for the Azure portal, see the portal version of this article: [How-to: Set up an instance and authentication (portal)](how-to-set-up-instance-portal.md). [!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]+
+## Prerequisites: Permission requirements
[!INCLUDE [digital-twins-setup-permissions.md](../../includes/digital-twins-setup-permissions.md)] ## Prerequisites: Download the script
-The sample script is written in PowerShell. It is part of the [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/), which you can download to your machine by navigating to that sample link and selecting the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a *.ZIP* by selecting the *Code* button and *Download ZIP*.
+The sample script is written in PowerShell. It is part of the [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/), which you can download to your machine by navigating to that sample link and selecting the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the *Code* button and *Download ZIP*.
:::image type="content" source="media/includes/download-repo-zip.png" alt-text="View of the digital-twins-samples repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/includes/download-repo-zip.png":::
-This will download a *.ZIP* folder to your machine as **digital-twins-samples-master.zip**. Navigate to the folder on your machine and unzip it to extract the files.
+This will download a .zip folder to your machine as **digital-twins-samples-master.zip**. Navigate to the folder on your machine and unzip it to extract the files.
In the unzipped folder, the deployment script is located at _digital-twins-samples-master > scripts > **deploy.ps1**_.
Here are the steps to run the deployment script in Cloud Shell.
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-upload.png" alt-text="Cloud Shell window showing selection of the Upload icon":::
- Navigate to the _**deploy.ps1**_ file on your machine (in _digital-twins-samples-master > scripts > **deploy.ps1**_) and hit "Open." This will upload the file to Cloud Shell so that you can run it in the Cloud Shell window.
+ Navigate to the _**deploy.ps1**_ file on your machine (in _digital-twins-samples-master > scripts > **deploy.ps1**_) and select "Open." This will upload the file to Cloud Shell so that you can run it in the Cloud Shell window.
4. Run the script by sending the `./deploy.ps1` command in the Cloud Shell window. You can copy the command below (recall that to paste into Cloud Shell, you can use **Ctrl+Shift+V** on Windows and Linux, or **Cmd+Shift+V** on macOS. You can also use the right-click menu).
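The command block referenced above is simply the script invocation:

```azurecli-interactive
./deploy.ps1
```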
Here are the steps to run the deployment script in Cloud Shell.
As the script runs through the automated setup steps, you will be asked to pass in the following values: * For the instance: the *subscription ID* of your Azure subscription to use
- * For the instance: a *location* where you'd like to deploy the instance. To see what regions support Azure Digital Twins, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
+ * For the instance: a *location* where you want to deploy the instance. To see what regions support Azure Digital Twins, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
* For the instance: a *resource group* name. You can use an existing resource group, or enter a new name of one to create. * For the instance: a *name* for your Azure Digital Twins instance. If your subscription has another Azure Digital Twins instance in the region that's already using the specified name, you'll be asked to pick a different name.
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
Here's how to download your chosen collection to your machine so that you can im
1. Select the **Raw** button to open the raw text of the file. :::image type="content" source="media/how-to-use-postman/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman/swagger-raw.png"::: 1. Copy the text from the window, and paste it into a new file on your machine.
-1. Save the file with a *.json* extension (the file name can be whatever you want, as long as you can remember it to find the file later).
+1. Save the file with a .json extension (you can name the file whatever you want, as long as you can find it again later).
### Import the collection
To make a Postman request to one of the Azure Digital Twins APIs, you'll need th
To proceed with an example query, this article will use the Query API (and its [reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)) to query for all the digital twins in an instance.
-1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST `https://digitaltwins-hostname/query?api-version=2020-10-31`*.
+1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST `https://digitaltwins-host-name/query?api-version=2020-10-31`*.
1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. This is where you will use your instance's **host name** from the [Prerequisites](#prerequisites) section. :::image type="content" source="media/how-to-use-postman/postman-request-url.png" alt-text="Screenshot of the new request's details in Postman. The query URL from the reference documentation has been filled into the request URL box." lightbox="media/how-to-use-postman/postman-request-url.png":::
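For orientation, the body of this request is a JSON object with a single `query` property. A minimal sketch with a query that returns all twins:

```json
{
  "query": "SELECT * FROM DIGITALTWINS"
}
```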
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
You'll need an Azure subscription to complete this quickstart. If you don't have
You'll also need **Node.js** on your machine. To get the latest version, see [Node.js](https://nodejs.org/).
-Finally, you'll also need to download the sample to use during the quickstart. The sample application is **Azure Digital Twins Explorer**. This sample contains the app you use in the quickstart to load and explore an Azure Digital Twins scenario. It also contains the sample scenario files. To get the sample, go to [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Browse code** button underneath the title, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a *.ZIP* file.
+Finally, you'll also need to download the sample to use during the quickstart. The sample application is **Azure Digital Twins Explorer**. This sample contains the app you use in the quickstart to load and explore an Azure Digital Twins scenario. It also contains the sample scenario files. To get the sample, go to [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/). Select the **Browse code** button underneath the title, which will take you to the GitHub repo for the sample. Select the **Code** button and **Download ZIP** to download the sample as a .zip file.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/download-repo-zip.png" alt-text="Screenshot of the digital-twins-explorer repo on GitHub. The Code button is selected, producing a small dialog box where the Download ZIP button is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/download-repo-zip.png":::
In this quickstart, you made the temperature update manually. It's common in Azu
To wrap up the work for this quickstart, first end the running console app. This action shuts off the connection to the Azure Digital Twins Explorer app in the browser. You'll no longer be able to view live data in the browser. You can close the browser tab.
-Then, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+Then, you can choose which resources you want to remove, depending on what you want to do next.
* **If you plan to continue to the Azure Digital Twins tutorials**, you can reuse the instance in this quickstart for those articles, and you don't need to remove it.
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
What you need to begin:
Once you are ready to go with your Azure Digital Twins instance, start setting up the client app project.
-Open a command prompt or other console window on your machine, and create an empty project directory where you would like to store your work during this tutorial. Name the directory whatever you would like (for example, *DigitalTwinsCodeTutorial*).
+Open a command prompt or other console window on your machine, and create an empty project directory where you want to store your work during this tutorial. Name the directory whatever you want (for example, *DigitalTwinsCodeTutorial*).
Navigate into the new directory.
Next, you'll add code to this file to fill out some functionality.
The first thing your app will need to do is authenticate against the Azure Digital Twins service. Then, you can create a service client class to access the SDK functions.
-In order to authenticate, you need the *hostName* of your Azure Digital Twins instance.
+In order to authenticate, you need the *host name* of your Azure Digital Twins instance.
In *Program.cs*, paste the following code below the "Hello, World!" printout line in the `Main` method.
-Set the value of `adtInstanceUrl` to your Azure Digital Twins instance *hostName*.
+Set the value of `adtInstanceUrl` to your Azure Digital Twins instance *host name*.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Authentication_code":::
Azure Digital Twins has no intrinsic domain vocabulary. The types of elements in
The first step in creating an Azure Digital Twins solution is defining at least one model in a DTDL file.
-In the directory where you created your project, create a new *.json* file called *SampleModel.json*. Paste in the following file body:
+In the directory where you created your project, create a new .json file called *SampleModel.json*. Paste in the following file body:
:::code language="json" source="~/digital-twins-docs-samples/models/SampleModel.json":::
At this point in the tutorial, you have a complete client app, capable of perfor
## Clean up resources
-After completing this tutorial, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+After completing this tutorial, you can choose which resources you want to remove, depending on what you want to do next.
* **If you plan to continue to the next tutorial**, the instance used in this tutorial can be reused in the next one. You can keep the Azure Digital Twins resources you set up here and skip the rest of this section.
digital-twins Tutorial Command Line App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-app.md
In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is a **sample command-line client application** for interacting with an Azure Digital Twins instance. The client app is similar to the one written in [Tutorial: Code a client app](tutorial-code.md).
-You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [code of the sample](https://github.com/Azure-Samples/digital-twins-samples/tree/master/) to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you would like.
+You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [code of the sample](https://github.com/Azure-Samples/digital-twins-samples/tree/master/) to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you want.
In this tutorial, you will... > [!div class="checklist"]
Run the following commands in the running project console to answer some questio
Query ```
- This allows you to take stock of your environment at a glance, and make sure everything is represented as you'd like it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
+ This allows you to take stock of your environment at a glance, and make sure everything is represented as you want it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
:::image type="content" source="media/tutorial-command-line/app/output-query-all.png" alt-text="Screenshot showing a partial result from the twin query, including room0 and floor1.":::
Run the following commands in the running project console to answer some questio
## Clean up resources
-After completing this tutorial, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+After completing this tutorial, you can choose which resources you want to remove, depending on what you want to do next.
* **If you plan to continue to the next tutorial**, you can keep the resources you set up here to continue using this Azure Digital Twins instance and configured sample app for the next tutorial
-* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the sample app's `DeleteAllTwins` and `DeleteAllModels` commands to clear the twins and models in your instance, respectively.
+* **If you want to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the sample app's `DeleteAllTwins` and `DeleteAllModels` commands to clear the twins and models in your instance, respectively.
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
Run the following queries in the Cloud Shell to answer some questions about the
az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DIGITALTWINS" ```
- This allows you to take stock of your environment at a glance, and make sure everything is represented as you'd like it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
+ This allows you to take stock of your environment at a glance, and make sure everything is represented as you want it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
:::image type="content" source="media/tutorial-command-line/cli/output-query-all.png" alt-text="Screenshot of Cloud Shell showing partial result of twin query, including room0 and room1." lightbox="media/tutorial-command-line/cli/output-query-all.png":::
Run the following queries in the Cloud Shell to answer some questions about the
## Clean up resources
-After completing this tutorial, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+After completing this tutorial, you can choose which resources you want to remove, depending on what you want to do next.
* **If you plan to continue to the next tutorial**, you can keep the resources you set up here and reuse the Azure Digital Twins instance without clearing anything in between.
-* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the [az dt twin relationship delete](/cli/azure/dt/twin/relationship#az_dt_twin_relationship_delete), [az dt twin delete](/cli/azure/dt/twin#az_dt_twin_delete), and [az dt model delete](/cli/azure/dt/model#az_dt_model_delete) commands to clear the relationships, twins, and models in your instance, respectively.
+* **If you want to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the [az dt twin relationship delete](/cli/azure/dt/twin/relationship#az_dt_twin_relationship_delete), [az dt twin delete](/cli/azure/dt/twin#az_dt_twin_delete), and [az dt model delete](/cli/azure/dt/model#az_dt_model_delete) commands to clear the relationships, twins, and models in your instance, respectively.
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
In the *Solution Explorer* pane, expand _**SampleFunctionsApp** > Dependencies_.
:::image type="content" source="media/tutorial-end-to-end/update-dependencies-1.png" alt-text="Visual Studio: Manage NuGet Packages for the SampleFunctionsApp project" border="false":::
-This will open the NuGet Package Manager. Select the *Updates* tab and if there are any packages to be updated, check the box to *Select all packages*. Then hit *Update*.
+This will open the NuGet Package Manager. Select the *Updates* tab, and if there are any packages to update, check the box to *Select all packages*. Then select *Update*.
:::image type="content" source="media/tutorial-end-to-end/update-dependencies-2.png" alt-text="Visual Studio: Selecting to update all packages in the NuGet Package Manager":::
The second setting creates an **environment variable** for the function with the
Run the command below, filling in the placeholders with the details of your resources. ```azurecli-interactive
-az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=https://<your-Azure-Digital-Twins-instance-hostname>"
+az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=https://<your-Azure-Digital-Twins-instance-host-name>"
``` The output is the list of settings for the Azure Function, which should now contain an entry called **ADT_SERVICE_URL**.
Fill in the fields as follows (fields filled by default are not mentioned):
* *TOPIC DETAILS* > **System Topic Name**: Give a name to use for the system topic. * *EVENT TYPES* > **Filter to Event Types**: Select *Device Telemetry* from the menu options. * *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options.
-* *ENDPOINT DETAILS* > **Endpoint**: Hit the *Select an endpoint* link. This will open a *Select Azure Function* window:
+* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link. This will open a *Select Azure Function* window:
:::image type="content" source="media/tutorial-end-to-end/event-subscription-3.png" alt-text="Azure portal event subscription: select Azure function" border="false"::: - Fill in your **Subscription**, **Resource group**, **Function app** and **Function** (*ProcessHubToDTEvents*). Some of these may auto-populate after selecting the subscription.
- - Hit **Confirm Selection**.
+ - Select **Confirm Selection**.
-Back on the *Create Event Subscription* page, hit **Create**.
+Back on the *Create Event Subscription* page, select **Create**.
### Register the simulated device with IoT Hub
The steps to create this event subscription are similar to when you subscribed t
On the *Create Event Subscription* page, fill in the fields as follows (fields filled by default are not mentioned): * *EVENT SUBSCRIPTION DETAILS* > **Name**: Give a name to your event subscription. * *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options.
-* *ENDPOINT DETAILS* > **Endpoint**: Hit the *Select an endpoint* link. This will open a *Select Azure Function* window:
+* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link. This will open a *Select Azure Function* window:
- Fill in your **Subscription**, **Resource group**, **Function app** and **Function** (*ProcessDTRoutedData*). Some of these may auto-populate after selecting the subscription.
- - Hit **Confirm Selection**.
+ - Select **Confirm Selection**.
-Back on the *Create Event Subscription* page, hit **Create**.
+Back on the *Create Event Subscription* page, select **Create**.
### Run the simulation and see the results
Here is a review of the scenario that you built out in this tutorial.
## Clean up resources
-After completing this tutorial, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+After completing this tutorial, you can choose which resources you want to remove, depending on what you want to do next.
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
-* **If you'd like to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/dt) CLI commands in an [Azure Cloud Shell](https://shell.azure.com) window to delete the elements you'd like to remove.
+* **If you want to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/dt) CLI commands in an [Azure Cloud Shell](https://shell.azure.com) window to delete the elements you want to remove.
This option will not remove any of the other Azure resources created in this tutorial (IoT Hub, Azure Functions app, etc.). You can delete these individually using the [az commands](/cli/azure/reference-index) appropriate for each resource type.
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
The following tables show which migration scenarios are supported when using Azu
> [!NOTE] > If a scenario listed as supported below does not appear within the user interface, please contact the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias for additional information.
-> [!IMPORTANT]
-> To view all scenarios currently supported by Azure Database Migration Service in Private Preview, see the [DMS Preview site](https://aka.ms/dms-preview).
- ### Offline (one-time) migration support The following table shows Azure Database Migration Service support for offline migrations.
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/deploy.md
sample as a starter.
1. From the **Getting started** page on the left, select the **Create** button under _Create a blueprint_.
-1. Find the **Azure Security Benchmark Foundation** blueprint sample under _Other Samples_ and select **Use this
- sample**.
+1. Find the **Azure Security Benchmark Foundation** blueprint sample under _Other Samples_ and
+ select **Use this sample**.
1. Enter the _Basics_ of the blueprint sample:
- - **Blueprint name**: Provide a name for your copy of the Azure Security Benchmark Foundation blueprint sample.
+ - **Blueprint name**: Provide a name for your copy of the Azure Security Benchmark Foundation
+ blueprint sample.
- **Definition location**: Use the ellipsis and select the management group to save your copy of the sample to.
to make each deployment of the copy of the blueprint sample unique.
The parameters defined in this section are used by many of the artifacts in the blueprint definition to provide consistency.
-
- - **Prefix for resources and resource groups**: This string is used as a prefix for all resource and resource group names
+
+ - **Prefix for resources and resource groups**: This string is used as a prefix for all
+ resource and resource group names
- **Hub name**: Name for the hub
- - **Log retention (days)**: Number of days that logs are retained; entering '0' retains logs indefinitely
- - **Deploy hub**: Enter 'true' or 'false' to specify whether the assignment deploys the hub components of the architecture
+ - **Log retention (days)**: Number of days that logs are retained; entering '0' retains logs
+ indefinitely
+ - **Deploy hub**: Enter 'true' or 'false' to specify whether the assignment deploys the hub
+ components of the architecture
- **Hub location**: Location for the hub resource group
- - **Destination IP addresses**: Destination IP addresses for outbound connectivity; comma-separated list of IP addresses or IP range prefixes
+ - **Destination IP addresses**: Destination IP addresses for outbound connectivity;
+ comma-separated list of IP addresses or IP range prefixes
- **Network Watcher name**: Name for the Network Watcher resource - **Network Watcher resource group name**: Name for the Network Watcher resource group
- - **Enable DDoS protection**: Enter 'true' or 'false' to specify whether or not DDoS Protection is enabled in the virtual network
-
- > [!NOTE]
- > If Network Watcher is already enabled, it's recommended that you use the existing
- > Network Watcher resource group. You must also provide the location for the existing Network
- > Watcher resource group for the artifact parameter **Network Watcher resource group location**.
+ - **Enable DDoS protection**: Enter 'true' or 'false' to specify whether or not DDoS Protection
+ is enabled in the virtual network
+
+ > [!NOTE]
+ > If Network Watcher is already enabled, it's recommended that you use the existing Network
+ > Watcher resource group. You must also provide the location for the existing Network Watcher
+ > resource group for the artifact parameter **Network Watcher resource group location**.
- Artifact parameters
to make each deployment of the copy of the blueprint sample unique.
> [!WARNING] > The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
## Artifact parameters table
Watcher resource group location** specifies the existing Network Watcher resourc
## Next steps
-Now that you've reviewed the steps to deploy the Azure Security Benchmark Foundation blueprint sample, visit the
-following article to learn about the architecture:
+Now that you've reviewed the steps to deploy the Azure Security Benchmark Foundation blueprint
+sample, visit the following article to learn about the architecture:
> [!div class="nextstepaction"] > [Azure Security Benchmark Foundation blueprint - Overview](./index.md)
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/index.md
foundation. This environment is composed of:
enable encrypted traffic between an Azure virtual network and an on-premises location over the public Internet.
-> [!NOTE]
+> [!NOTE]
> The Azure Security Benchmark Foundation lays out a foundational architecture for > workloads. The architecture diagram above includes several notional resources to demonstrate > potential use of subnets. You still need to deploy workloads on this foundational architecture. ## Next steps
-You've reviewed the overview and architecture of the Azure Security Benchmark Foundation blueprint sample.
+You've reviewed the overview and architecture of the Azure Security Benchmark Foundation blueprint
+sample.
> [!div class="nextstepaction"] > [Azure Security Benchmark Foundation blueprint - Deploy steps](./deploy.md)
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-foundation/deploy.md
to make each deployment of the copy of the blueprint sample unique.
- Lock Assignment
- Select the blueprint lock setting for your environment. For more information, see [blueprints resource locking](../../concepts/resource-locking.md).
+ Select the blueprint lock setting for your environment. For more information, see
+ [blueprints resource locking](../../concepts/resource-locking.md).
- Managed Identity
to make each deployment of the copy of the blueprint sample unique.
- **Organization**: Enter your organization name, such as Contoso; it must be unique. - **Azure Region**: Select the Azure Region for Deployment. - **Allowed locations**: Which Azure Regions will you allow resources to be built in?
-
+ - Artifact parameters The parameters defined in this section apply to the artifact under which it's defined. These
to make each deployment of the copy of the blueprint sample unique.
> [!WARNING] > The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
## Artifact parameters table
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-foundation/index.md
enterprise-ready foundation. This environment is composed of:
- Allowed Azure Region for Resources and Resource Groups - Allowed Storage Account SKUs (choose while deploying) - Allowed Azure VM SKUs (choose while deploying)
- - Require Network Watcher to be deployed
+ - Require Network Watcher to be deployed
- Require Azure Storage Account Secure transfer Encryption
- - Deny resource types (choose while deploying)
+ - Deny resource types (choose while deploying)
- Policy initiatives: - Enable Monitoring in Azure Security Center (100+ policy definitions)
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-migrate-landing-zone/deploy.md
provided to make each deployment of the copy of the blueprint sample unique.
[managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md). - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint sample.
-
+ - Lock Assignment
- Select the blueprint lock setting for your environment. For more information, see [blueprints resource locking](../../concepts/resource-locking.md).
+ Select the blueprint lock setting for your environment. For more information, see
+ [blueprints resource locking](../../concepts/resource-locking.md).
- Managed Identity
provided to make each deployment of the copy of the blueprint sample unique.
The parameters defined in this section are used by many of the artifacts in the blueprint definition to provide consistency.
- - **Organization**: Enter your organization name such as Contoso or Fabrikam, must be unique.
- - **AzureRegion**: Select one Azure Region for Deployment.
-
+ - **Organization**: Enter your organization name such as Contoso or Fabrikam, must be unique.
+ - **AzureRegion**: Select one Azure Region for Deployment.
+ - Artifact parameters The parameters defined in this section apply to the artifact under which it's defined. These
provided to make each deployment of the copy of the blueprint sample unique.
> [!WARNING] > The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
## Artifact parameters table
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-migrate-landing-zone/index.md
enterprise-ready governance. This environment is composed of:
an isolated network and subnets for your virtual machine. - Deploy [Azure Migrate Project](../../../../migrate/migrate-services-overview.md) for discovery and assessment. We're adding the tools for Server assessment, Server migration, Database assessment,
- and Database migration.
-
+ and Database migration.
All these elements abide to the proven practices published in the [Azure Architecture Center - Reference Architectures](/azure/architecture/reference-architectures/).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/canada-federal-pbmm/control-mapping.md
Title: Canada Federal PBMM blueprint sample controls description: Control mapping of the Canada Federal PBMM blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Control mapping of the Canada Federal PBMM blueprint sample
appropriate action to ensure account management requirements are met.
- External accounts with read permissions should be removed from your subscription - External accounts with write permissions should be removed from your subscription - ## AC-2 (7) Account Management | Role-Based Schemes Azure implements
monitor and enforce use of advanced data security on SQL server.
## AC-17 (1) Remote Access | Automated Monitoring / Control
-This blueprint helps you monitor and control remote access by assigning [Azure Policy](../../../policy/overview.md)
-definitions to monitor that remote debugging for Azure App Service application is turned off. The
-blueprint also assigns policy definitions that audit Linux virtual machines that allow remote
-connections from accounts without passwords. Additionally, the blueprint assigns an Azure Policy
-definition that helps you monitor unrestricted access to storage accounts. Monitoring these
-indicators can help you ensure remote access methods comply with your security policy.
+This blueprint helps you monitor and control remote access by assigning
+[Azure Policy](../../../policy/overview.md) definitions to monitor that remote debugging for Azure
+App Service applications is turned off. The blueprint also assigns policy definitions that audit
+Linux virtual machines that allow remote connections from accounts without passwords. Additionally,
+the blueprint assigns an Azure Policy definition that helps you monitor unrestricted access to
+storage accounts. Monitoring these indicators can help you ensure remote access methods comply with
+your security policy.
- Show audit results from Linux VMs that allow remote connections from accounts without passwords - Storage accounts should restrict network access
all virtual machine user accounts comply with your organization's password polic
- Show audit results from Windows VMs that do not have a maximum password age of 70 days - Show audit results from Windows VMs that do not have a minimum password age of 1 day - Show audit results from Windows VMs that do not have the password complexity setting enabled-- Show audit results from Windows VMs that do not restrict the minimum password length to 14 characters
+- Show audit results from Windows VMs that do not restrict the minimum password length to 14
+ characters
## IA-8 (100) Identification and Authentication (Non-Organizational Users) | Identity and Credential Assurance Levels
you can take appropriate action.
- Deploy Threat Detection on SQL servers > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/canada-federal-pbmm/deploy.md
Title: Deploy Canada Federal PBMM blueprint sample description: Deploy steps for the Canada Federal PBMM blueprint sample including blueprint artifact parameter details. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Deploy the Canada Federal PBMM blueprint samples
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/canada-federal-pbmm/index.md
Title: Canada Federal PBMM blueprint sample overview description: Overview of the Canada Federal PBMM blueprint sample. This blueprint sample helps customers assess specific Canada Federal PBMM controls. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Overview of the Canada Federal PBMM blueprint sample
The Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM) blueprint sample provides a set of governance guardrails using [Azure Policy](../../../policy/overview.md) that help you work toward [Canada Federal PBMM](https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/cloud-services/government-canada-security-control-profile-cloud-based-it-services.html)
-attestation.
+attestation.
## Blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/control-mapping.md
# Control mapping of the DoD Impact Level 4 blueprint sample
-The following article details how the Azure Blueprints Department of Defense Impact Level 4 (DoD IL4) blueprint sample maps to the
-DoD Impact Level 4 controls. For more information about the controls, see
-[DoD Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/pdf/Cloud_Computing_SRG_v1r3.pdf).
-The Defense Information Systems Agency (DISA) is an agency of the US Department of Defense (DoD) that is responsible for developing and maintaining the DoD Cloud Computing Security Requirements Guide (SRG). The SRG defines the baseline security requirements for cloud service providers (CSPs) that host DoD information, systems, and applications, and for DoD's use of cloud services.
+The following article details how the Azure Blueprints Department of Defense Impact Level 4 (DoD
+IL4) blueprint sample maps to the DoD Impact Level 4 controls. For more information about the
+controls, see
+[DoD Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/pdf/Cloud_Computing_SRG_v1r3.pdf).
+The Defense Information Systems Agency (DISA) is an agency of the US Department of Defense (DoD)
+that is responsible for developing and maintaining the DoD Cloud Computing Security Requirements
+Guide (SRG). The SRG defines the baseline security requirements for cloud service providers (CSPs)
+that host DoD information, systems, and applications, and for DoD's use of cloud services.
The following mappings are to the **DoD Impact Level 4** controls. Use the navigation on the right to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
appropriate action to ensure account management requirements are met.
## AC-2 (7) Account Management | Role-Based Schemes
-Azure implements [Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md)
-to help you manage who has access to resources in Azure. Using the Azure portal, you can
-review who has access to Azure resources and their permissions. This blueprint also assigns
+Azure implements
+[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
+help you manage who has access to resources in Azure. Using the Azure portal, you can review who has
+access to Azure resources and their permissions. This blueprint also assigns
[Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active Directory authentication for SQL Servers and Service Fabric. Using Azure Active Directory authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. Additionally, this blueprint assigns an Azure Policy definition to audit
-the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are implement can help you
-verify need and proper implementation, as custom Azure RBAC rules are error prone.
+the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are implemented can
+help you verify need and proper implementation, as custom Azure RBAC rules are error prone.
- An Azure Active Directory administrator should be provisioned for SQL servers - Audit usage of custom RBAC rules
virtual machine administrator permissions can help you implement appropriate sep
- A maximum of 3 owners should be designated for your subscription - Audit Windows VMs in which the Administrators group contains any of the specified members - Audit Windows VMs in which the Administrators group does not contain all of the specified members-- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the specified members-- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of the specified members
+- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of
+ the specified members
- There should be more than one owner assigned to your subscription ## AC-6 (7) Least Privilege | Review of User Privileges
implemented.
- A maximum of 3 owners should be designated for your subscription - Audit Windows VMs in which the Administrators group contains any of the specified members - Audit Windows VMs in which the Administrators group does not contain all of the specified members-- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the specified members-- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of the specified members
+- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of
+ the specified members
- There should be more than one owner assigned to your subscription ## AC-17 (1) Remote Access | Automated Monitoring / Control
Policy definition that helps you monitor unrestricted access to storage accounts
indicators can help you ensure remote access methods comply with your security policy. - \[Preview\]: Audit Linux VMs that allow remote connections from accounts without passwords-- \[Preview\]: Deploy requirements to audit Linux VMs that allow remote connections from accounts without passwords
+- \[Preview\]: Deploy requirements to audit Linux VMs that allow remote connections from accounts
+ without passwords
- Audit unrestricted network access to storage accounts - Remote debugging should be turned off for API App - Remote debugging should be turned off for Function App
configured on SQL Servers.
- Advanced data security should be enabled on your SQL servers - Advanced data security should be enabled on your SQL managed instances - Advanced Threat Protection types should be set to 'All' in SQL server Advanced Data Security settings-- Advanced Threat Protection types should be set to 'All' in SQL managed instance Advanced Data Security settings
+- Advanced Threat Protection types should be set to 'All' in SQL managed instance Advanced Data
+ Security settings
- Auditing should be enabled on advanced data security settings on SQL Server-- Email notifications to admins and subscription owners should be enabled in SQL server advanced data security settings-- Email notifications to admins and subscription owners should be enabled in SQL managed instance advanced data security settings-- Advanced data security settings for SQL server should contain an email address to receive security alerts-- Advanced data security settings for SQL managed instance should contain an email address to receive security alerts
+- Email notifications to admins and subscription owners should be enabled in SQL server advanced
+ data security settings
+- Email notifications to admins and subscription owners should be enabled in SQL managed instance
+ advanced data security settings
+- Advanced data security settings for SQL server should contain an email address to receive security
+ alerts
+- Advanced data security settings for SQL managed instance should contain an email address to
+ receive security alerts
## AU-3 (2) Content of Audit Records | Centralized Management of Planned Audit Record Content
Adaptive application control in Azure Security Center is an intelligent, automat
application allow list solution that can block or prevent specific software from running on your virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allow list is recommended but
-has not yet been configured.
+definition that helps you monitor virtual machines where an application allow list is recommended
+but has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines
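As a minimal sketch of how the audit results surfaced by such a definition can be queried (the assignment name below is a hypothetical placeholder, not from the article), the Azure CLI exposes per-resource compliance state:

```azurecli
# List resources that are non-compliant with a given policy assignment.
# "adaptiveAppControls" is an illustrative assignment name.
az policy state list \
  --filter "complianceState eq 'NonCompliant' and policyAssignmentName eq 'adaptiveAppControls'" \
  --query "[].{resource:resourceId, state:complianceState}" \
  --output table
```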
identification and authentication policy.
- \[Preview\]: Audit Linux VMs that do not have the passwd file permissions set to 0644
- \[Preview\]: Audit Linux VMs that have accounts without passwords
- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
-- \[Preview\]: Deploy requirements to audit Linux VMs that do not have the passwd file permissions set to 0644
+- \[Preview\]: Deploy requirements to audit Linux VMs that do not have the passwd file permissions
+ set to 0644
- \[Preview\]: Deploy requirements to audit Linux VMs that have accounts without passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible encryption
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible
+ encryption
## IA-5 (1) Authenticator Management | Password-Based Authentication
-This blueprint helps you enforce strong passwords by assigning [Azure Policy](../../../policy/overview.md)
-definitions that audit Windows virtual machines that don't enforce minimum strength and other
-password requirements. Awareness of virtual machines in violation of the password strength policy
-helps you take corrective actions to ensure passwords for all virtual machine user accounts comply
-with your organization's password policy.
+This blueprint helps you enforce strong passwords by assigning
+[Azure Policy](../../../policy/overview.md) definitions that audit Windows virtual machines that
+don't enforce minimum strength and other password requirements. Awareness of virtual machines in
+violation of the password strength policy helps you take corrective actions to ensure passwords for
+all virtual machine user accounts comply with your organization's password policy.
- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
with your organization's password policy.
- \[Preview\]: Audit Windows VMs that do not have the password complexity setting enabled
- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
-- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a minimum password age of 1 day
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have the password complexity setting enabled
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible encryption
+- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24
+ passwords
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of
+ 70 days
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a minimum password age of 1
+ day
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not have the password complexity
+ setting enabled
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password
+ length to 14 characters
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible
+ encryption
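A minimal sketch of assigning one of these built-in audit definitions at subscription scope (the assignment name and the definition placeholder are hypothetical):

```azurecli
# Assign a built-in password-audit definition at subscription scope.
# Replace <definition-name-or-id> with the GUID name or full resource ID of
# the built-in definition you want to assign.
az policy assignment create \
  --name "audit-windows-password-policy" \
  --display-name "Audit Windows VM password requirements" \
  --policy "<definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>"
```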
## IR-6 (2) Incident Reporting | Vulnerabilities Related to Incidents
This blueprint provides policy definitions that audit records with analysis of vulnerability assessment on virtual machines, virtual machine scale sets, and SQL servers. These insights provide
-real-time information about the security state of your deployed resources and can help you prioritize
-remediation actions.
+real-time information about the security state of your deployed resources and can help you
+prioritize remediation actions.
- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
remediation actions.
## RA-5 Vulnerability Scanning
-This blueprint helps you manage information system vulnerabilities by assigning [Azure Policy](../../../policy/overview.md)
-definitions that monitor operating system vulnerabilities, SQL vulnerabilities, and virtual machine
-vulnerabilities in Azure Security Center. Azure Security Center provides reporting capabilities that
-enable you to have real-time insight into the security state of deployed Azure resources. This
-blueprint also assigns policy definitions that audit and enforce Advanced Data Security on SQL
-servers. Advanced data security included vulnerability assessment and advanced threat protection
-capabilities to help you understand vulnerabilities in your deployed resources.
+This blueprint helps you manage information system vulnerabilities by assigning
+[Azure Policy](../../../policy/overview.md) definitions that monitor operating system
+vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security Center.
+Azure Security Center provides reporting capabilities that enable you to have real-time insight into
+the security state of deployed Azure resources. This blueprint also assigns policy definitions that
+audit and enforce Advanced Data Security on SQL servers. Advanced data security includes
+vulnerability assessment and advanced threat protection capabilities to help you understand
+vulnerabilities in your deployed resources.
- Advanced data security should be enabled on your managed instances
- Advanced data security should be enabled on your SQL servers
capabilities to help you understand vulnerabilities in your deployed resources.
Azure's distributed denial of service (DDoS) standard tier provides additional features and mitigation capabilities over the basic service tier. These additional features include Azure Monitor
-integration and the ability to review post-attack mitigation reports. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that audits if the DDoS standard tier is enabled. Understanding the capability difference
-between the service tiers can help you select the best solution to address denial of service
-protections for your Azure environment.
+integration and the ability to review post-attack mitigation reports. This blueprint assigns an
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+enabled. Understanding the capability difference between the service tiers can help you select the
+best solution to address denial of service protections for your Azure environment.
- DDoS Protection Standard should be enabled
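As a rough Azure CLI sketch of enabling the standard tier directly rather than only auditing it (resource and plan names are placeholders), a DDoS protection plan is created and then attached to a virtual network:

```azurecli
# Create a DDoS protection plan and attach it to an existing virtual network.
az network ddos-protection create \
  --resource-group MyResourceGroup \
  --name MyDdosPlan

az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --ddos-protection true \
  --ddos-protection-plan MyDdosPlan
```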
## SC-7 Boundary Protection
-This blueprint helps you manage and control the system boundary by assigning an [Azure Policy](../../../policy/overview.md)
-definition that monitors for network security group hardening recommendations in Azure
-Security Center. Azure Security Center analyzes traffic patterns of Internet facing virtual machines
-and provides network security group rule recommendations to reduce the potential attack surface.
-Additionally, this blueprint also assigns policy definitions that monitor unprotected
-endpoints, applications, and storage accounts. Endpoints and applications that aren't protected by a
-firewall, and storage accounts with unrestricted access can allow unintended access to information
-contained within the information system.
+This blueprint helps you manage and control the system boundary by assigning an
+[Azure Policy](../../../policy/overview.md) definition that monitors for network security group
+hardening recommendations in Azure Security Center. Azure Security Center analyzes traffic patterns
+of Internet facing virtual machines and provides network security group rule recommendations to
+reduce the potential attack surface. Additionally, this blueprint also assigns policy definitions
+that monitor unprotected endpoints, applications, and storage accounts. Endpoints and applications
+that aren't protected by a firewall, and storage accounts with unrestricted access can allow
+unintended access to information contained within the information system.
- Network Security Group Rules for Internet facing virtual machines should be hardened
- Access through Internet facing endpoint should be restricted
virtual machines that can support just-in-time access but haven't yet been confi
Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines, reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual machine access helps you manage exceptions to your traffic flow policy by facilitating the access
-request and approval processes. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines that can support just-in-time access but haven't
-yet been configured.
+request and approval processes. This blueprint assigns an
+[Azure Policy](../../../policy/overview.md) definition that helps you monitor virtual machines that
+can support just-in-time access but haven't yet been configured.
- Just-In-Time network access control should be applied on virtual machines

## SC-8 (1) Transmission Confidentiality and Integrity | Cryptographic or Alternate Physical Protection
-This blueprint helps you protect the confidential and integrity of transmitted information by
-assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
-cryptographic mechanism implemented for communications protocols. Ensuring communications are
-properly encrypted can help you meet your organization's requirements or protecting information
-from unauthorized disclosure and modification.
+This blueprint helps you protect the confidentiality and integrity of transmitted information by
+assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
+cryptographic mechanisms implemented for communications protocols. Ensuring communications are
+properly encrypted can help you meet your organization's requirements for protecting information
+from unauthorized disclosure and modification.
- API App should only be accessible over HTTPS
- Audit Windows web servers that are not using secure communication protocols
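To illustrate remediating the first finding (app and group names are placeholders, not from the article), an App Service app can be switched to HTTPS-only traffic with the Azure CLI:

```azurecli
# Redirect all HTTP traffic to HTTPS for an App Service app.
az webapp update \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --https-only true
```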
encryption on SQL databases, virtual machine disks, and automation account varia
## SI-2 Flaw Remediation
-This blueprint helps you manage information system flaws by assigning [Azure Policy](../../../policy/overview.md)
-definitions that monitor missing system updates, operating system vulnerabilities, SQL
-vulnerabilities, and virtual machine vulnerabilities in Azure Security Center. Azure Security Center
-provides reporting capabilities that enable you to have real-time insight into the security state of
-deployed Azure resources. This blueprint also assigns a policy definition that ensures patching
-of the operating system for virtual machine scale sets.
+This blueprint helps you manage information system flaws by assigning
+[Azure Policy](../../../policy/overview.md) definitions that monitor missing system updates,
+operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure
+Security Center. Azure Security Center provides reporting capabilities that enable you to have
+real-time insight into the security state of deployed Azure resources. This blueprint also assigns a
+policy definition that ensures patching of the operating system for virtual machine scale sets.
- Require automatic OS image patching on Virtual Machine Scale Sets
- System updates on virtual machine scale sets should be installed
and virtual machines, providing threat intelligence, anomaly detection, and behavioral analytics in
Azure Security Center.

- Email notification to subscription owner for high severity alerts should be enabled
-- A security contact email address should be provided for your subscription
-- Email notifications to admins and subscription owners should be enabled in SQL managed instance advanced data security settings
-- Email notifications to admins and subscription owners should be enabled in SQL server advanced data security settings
+- A security contact email address should be provided for your subscription
+- Email notifications to admins and subscription owners should be enabled in SQL managed instance
+ advanced data security settings
+- Email notifications to admins and subscription owners should be enabled in SQL server advanced
+ data security settings
- A security contact phone number should be provided for your subscription
-- Advanced data security settings for SQL server should contain an email address to receive security alerts
+- Advanced data security settings for SQL server should contain an email address to receive security
+ alerts
- Security Center standard pricing tier should be selected

## SI-4 (18) Information System Monitoring | Analyze Traffic / Covert Exfiltration
exfiltration of information.
- Deploy Advanced Threat Protection on Storage Accounts

> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
-Now that you've reviewed the control mapping of the DoD Impact Level 4 blueprint, visit the following
-articles to learn about the blueprint and how to deploy this sample:
+Now that you've reviewed the control mapping of the DoD Impact Level 4 blueprint, visit the
+following articles to learn about the blueprint and how to deploy this sample:
> [!div class="nextstepaction"]
> [DoD Impact Level 4 blueprint - Overview](./index.md)
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/control-mapping.md
controls, see
The Defense Information Systems Agency (DISA) is an agency of the US Department of Defense (DoD) that is responsible for developing and maintaining the DoD Cloud Computing Security Requirements Guide (SRG). The SRG defines the baseline security requirements for cloud service providers (CSPs)
-that host DoD information, systems, and applications, and for DoD's use of cloud services.
+that host DoD information, systems, and applications, and for DoD's use of cloud services.
The following mappings are to the **DoD Impact Level 5** controls. Use the navigation on the right to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
all virtual machine user accounts comply with your organization's password polic
This blueprint provides policy definitions that audit records with analysis of vulnerability assessment on virtual machines, virtual machine scale sets, and SQL servers. These insights provide
-real-time information about the security state of your deployed resources and can help you prioritize
-remediation actions.
+real-time information about the security state of your deployed resources and can help you
+prioritize remediation actions.
- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
Security Center.
- A security contact phone number should be provided for your subscription

> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/control-mapping.md
exfiltration of information.
- Deploy Advanced Threat Protection on Storage Accounts

> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/deploy.md
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|MFA should be enabled on accounts with write permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Long-term geo-redundant backup should be enabled for Azure SQL Databases|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|

## Next steps
Now that you've reviewed the steps to deploy the FedRAMP High blueprint sample, visit the following
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md
you can take appropriate action.
- Deploy Threat Detection on SQL servers

> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/index.md
quality and ready to deploy today to assist you in meeting your various complian
The CAF foundation and the CAF Migrate landing zone blueprints assume that the customer is preparing an existing clean single subscription for migrating on-premises assets and workloads in to Azure.
-(Region A and B in the figure).
+(Region A and B in the figure).
There's an opportunity to iterate on the sample blueprints and look for patterns of customizations that a customer is applying. There is also an opportunity to proactively address blueprints that are
customizing each for their unique needs.
- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
-- Resolve issues during the assignment of a blueprint with [general troubleshooting](../troubleshoot/general.md).
+- Resolve issues during the assignment of a blueprint with
+ [general troubleshooting](../troubleshoot/general.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/control-mapping.md
exfiltration of information.
- Deploy Advanced Threat Protection on Storage Accounts

> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/control-mapping.md
# Control mapping of the Australian Government ISM PROTECTED blueprint sample
-The following article details how the Azure Blueprints Australian Government ISM PROTECTED blueprint sample maps to the
-ISM PROTECTED controls. For more information about the controls, see
+The following article details how the Azure Blueprints Australian Government ISM PROTECTED blueprint
+sample maps to the ISM PROTECTED controls. For more information about the controls, see
[ISM PROTECTED](https://www.cyber.gov.au/ism).
-The following mappings are to the **ISM PROTECTED** controls. Use the navigation on the right to jump
-directly to a specific control mapping. Many of the mapped controls are implemented with an
+The following mappings are to the **ISM PROTECTED** controls. Use the navigation on the right to
+jump directly to a specific control mapping. Many of the mapped controls are implemented with an
[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit
-requirements** built-in policy initiative.
+**\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions
+to support audit requirements** built-in policy initiative.
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
requirements** built-in policy initiative.
> may change over time. To view the change history, see the
> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/control-mapping.md).

## Location Constraints
-This blueprint helps you restrict the location for the deployment of all resources and resource groups to "Australia Central", "Australia Central2", "Australia East" and "Australia Southeast" by assigning following Azure Policy definitions:
+This blueprint helps you restrict the location for the deployment of all resources and resource
+groups to "Australia Central", "Australia Central2", "Australia East" and "Australia Southeast" by
+assigning the following Azure Policy definitions:
-- Allowed locations (has been hard coded to "Australia Central", "Australia Central2", "Australia East" and "Australia Southeast")
-- Allowed locations for resource groups (has been hard coded to "Australia Central", "Australia Central2", "Australia East" and "Australia Southeast")
+- Allowed locations (has been hard coded to "Australia Central", "Australia Central2", "Australia
+ East" and "Australia Southeast")
+- Allowed locations for resource groups (has been hard coded to "Australia Central", "Australia
+ Central2", "Australia East" and "Australia Southeast")
## Guidelines for Personnel Security - Access to systems and their resources
This blueprint helps you restrict the location for the deployment of all resourc
- A maximum of 3 owners should be designated for your subscription
- There should be more than one owner assigned to your subscription
-- Show audit results from Windows VMs in which the Administrators group contains any of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the specified members
+- Show audit results from Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
+ specified members
### 1507 Privileged access to systems, applications and data repositories is validated when first requested and revalidated on an annual or more frequent basis

-- Show audit results from Windows VMs in which the Administrators group contains any of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the specified members
+- Show audit results from Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
+ specified members
### 1508 Privileged access to systems, applications and data repositories is limited to that required for personnel to undertake their duties

- A maximum of 3 owners should be designated for your subscription
- There should be more than one owner assigned to your subscription
-- Show audit results from Windows VMs in which the Administrators group contains any of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the specified members
+- Show audit results from Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
+ specified members
- Just-In-Time network access control should be applied on virtual machines

### 0415 The use of shared user accounts is strictly controlled, and personnel using such accounts are uniquely identifiable

-- Show audit results from Windows VMs in which the Administrators group contains any of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the specified members
+- Show audit results from Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
+ specified members
### 0445 Privileged users are assigned a dedicated privileged account to be used solely for tasks requiring privileged access

-- Show audit results from Windows VMs in which the Administrators group contains any of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the specified members
+- Show audit results from Windows VMs in which the Administrators group contains any of the
+ specified members
+- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
+ specified members
### 0430 Access to systems, applications and data repositories is removed or suspended on the same day personnel no longer have a legitimate requirement for access
This blueprint helps you restrict the location for the deployment of all resourc
- Audit unrestricted network access to storage accounts
- Service Fabric clusters should only use Azure Active Directory for client authentication
- Show audit results from Linux VMs that allow remote connections from accounts without passwords
-- Deploy prerequisites to audit Linux VMs that allow remote connections from accounts without passwords
+- Deploy prerequisites to audit Linux VMs that allow remote connections from accounts without
+ passwords
- Show audit results from Linux VMs that have accounts without passwords
- Deploy prerequisites to audit Linux VMs that have accounts without passwords
This blueprint helps you restrict the location for the deployment of all resourc
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities in container security configurations should be remediated
-- Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports
+- Vulnerability Assessment settings for SQL server should contain an email address to receive scan
+ reports
### 0940 Security vulnerabilities in applications and drivers assessed as high risk are patched, updated or mitigated within two weeks of the security vulnerability being identified by vendors, independent third parties, system managers or users
This blueprint helps you restrict the location for the deployment of all resourc
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities in container security configurations should be remediated
-- Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports
+- Vulnerability Assessment settings for SQL server should contain an email address to receive scan
+ reports
### 1472 Security vulnerabilities in applications and drivers assessed as moderate or low risk are patched, updated or mitigated within one month of the security vulnerability being identified by vendors, independent third parties, system managers or users
This blueprint helps you restrict the location for the deployment of all resourc
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities in container security configurations should be remediated
-- Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports
+- Vulnerability Assessment settings for SQL server should contain an email address to receive scan
+ reports
### 1494 Security vulnerabilities in operating systems and firmware assessed as extreme risk are patched, updated or mitigated within 48 hours of the security vulnerabilities being identified by vendors, independent third parties, system managers or users
This blueprint helps you restrict the location for the deployment of all resourc
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities in container security configurations should be remediated
-- Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports
+- Vulnerability Assessment settings for SQL server should contain an email address to receive scan
+ reports
### 1495 Security vulnerabilities in operating systems and firmware assessed as high risk are patched, updated or mitigated within two weeks of the security vulnerability being identified by vendors, independent third parties, system managers or users
This blueprint helps you restrict the location for the deployment of all resourc
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities in container security configurations should be remediated
-- Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports
+- Vulnerability Assessment settings for SQL server should contain an email address to receive scan
+ reports
### 1496 Security vulnerabilities in operating systems and firmware assessed as moderate or low risk are patched, updated or mitigated within one month of the security vulnerability being identified by vendors, independent third parties, system managers or users
This blueprint helps you restrict the location for the deployment of all resourc
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities in container security configurations should be remediated
-- Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports
+- Vulnerability Assessment settings for SQL server should contain an email address to receive scan
+ reports
## Guidelines for System Management - Data backup and restoration
This blueprint helps you restrict the location for the deployment of all resourc
- Only secure connections to your Redis Cache should be enabled
- Secure transfer to storage accounts should be enabled
- Show audit results from Windows web servers that are not using secure communication protocols
-- Deploy prerequisites to audit Windows web servers that are not using secure communication protocols
+- Deploy prerequisites to audit Windows web servers that are not using secure communication
+ protocols
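For the storage item above, a minimal Azure CLI sketch (the account and group names are placeholders) of requiring secure transfer:

```azurecli
# Reject unencrypted (HTTP) requests to a storage account.
az storage account update \
  --resource-group MyResourceGroup \
  --name mystorageaccount \
  --https-only true
```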
## Guidelines for Database Systems Management - Database management system software
This blueprint helps you restrict the location for the deployment of all resourc
- Latest TLS version should be used in your API App
- Latest TLS version should be used in your Web App
- Latest TLS version should be used in your Function App
-- Deploy prerequisites to audit Windows web servers that are not using secure communication protocols
+- Deploy prerequisites to audit Windows web servers that are not using secure communication
+ protocols
- Show audit results from Windows web servers that are not using secure communication protocols
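As a hedged sketch of remediating the TLS findings above (app and group names are placeholders), the minimum inbound TLS version for an App Service app can be raised with the Azure CLI:

```azurecli
# Require TLS 1.2 as the minimum version for inbound connections.
az webapp config set \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --min-tls-version 1.2
```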
## Guidelines for Data Transfers and Content Filtering - Content filtering

This blueprint helps you restrict the location for the deployment of all resourc
- DDoS Protection Standard should be enabled

> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> clouds.
## Next steps
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/deploy.md
away from alignment with ISM PROTECTED controls.
1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the ISM PROTECTED blueprint sample." Then select **Publish** at the bottom of the page.
+ modification later. Provide **Change notes** such as "First version published from the ISM
+ PROTECTED blueprint sample." Then select **Publish** at the bottom of the page.
## Assign the sample copy
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Remote debugging should be turned off for Web Application|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerabilities in security configuration on your machines should be remediated|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|MFA should be enabled on accounts with read permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Enforce password history | Specifies limits on password reuse - how many times a new password must be created for a user account before the password can be repeated. |
-|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Maximum password age | Specifies the maximum number of days that may elapse before a user account password must be changed. The format of the value is two integers separated by a comma, denoting an inclusive range. |
+|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Enforce password history | Specifies limits on password reuse - how many times a new password must be created for a user account before the password can be repeated. |
+|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Maximum password age | Specifies the maximum number of days that may elapse before a user account password must be changed. The format of the value is two integers separated by a comma, denoting an inclusive range. |
|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Minimum password age | Specifies the minimum number of days that must elapse before a user account password can be changed. |
-|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Minimum password length | Specifies the minimum number of characters that a user account password may contain. |
+|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Minimum password length | Specifies the minimum number of characters that a user account password may contain. |
|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Password must meet complexity requirements|Specifies whether a user account password must be complex. If required, a complex password must not contain part of user's account name or full name; be at least 6 characters long; contain a mix of uppercase, lowercase, number, and non-alphabetic characters. | |\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerabilities in container security configurations should be remediated|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md). | |\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Remote debugging should be turned off for App Service|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md). |
The following table provides a list of the blueprint artifact parameters:
## Next steps
-Now that you've reviewed the steps to deploy the Australian Government ISM PROTECTED blueprint sample, visit the following
-articles to learn about the blueprint and control mapping:
+Now that you've reviewed the steps to deploy the Australian Government ISM PROTECTED blueprint
+sample, visit the following articles to learn about the blueprint and control mapping:
> [!div class="nextstepaction"]
> [ISM PROTECTED blueprint - Overview](./index.md)
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/index.md
# Overview of the Australian Government ISM PROTECTED blueprint sample
-ISM Governance blueprint sample provides a set of governance guard-rails using [Azure Policy](../../../policy/overview.md) which help towards ISM PROTECTED attestation (Feb 2020 version). This Blueprint helps customers deploy a core set of policies for any Azure-deployed architecture requiring accreditation or compliance with the ISM framework.
+The ISM Governance blueprint sample provides a set of governance guard-rails using
+[Azure Policy](../../../policy/overview.md) that help towards ISM PROTECTED attestation (Feb 2020
+version). This blueprint helps customers deploy a core set of policies for any Azure-deployed
+architecture requiring accreditation or compliance with the ISM framework.
## Control mapping
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md
Title: ISO 27001 ASE/SQL workload blueprint sample controls description: Control mapping of the ISO 27001 App Service Environment/SQL Database workload blueprint sample to Azure Policy and Azure RBAC. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Control mapping of the ISO 27001 ASE/SQL workload blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/deploy.md
Title: Deploy ISO 27001 ASE/SQL workload blueprint sample description: Deploy steps of the ISO 27001 App Service Environment/SQL Database workload blueprint sample including blueprint artifact parameter details. Previously updated : 04/23/2021 Last updated : 04/30/2021 # Deploy the ISO 27001 App Service Environment/SQL Database workload blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/index.md
Title: ISO 27001 ASE/SQL workload blueprint sample overview description: Overview and architecture of the ISO 27001 App Service Environment/SQL Database workload blueprint sample. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Overview of the ISO 27001 App Service Environment/SQL Database workload blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md
Title: ISO 27001 Shared Services blueprint sample controls description: Control mapping of the ISO 27001 Shared Services blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Control mapping of the ISO 27001 Shared Services blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/deploy.md
Title: Deploy ISO 27001 Shared Services blueprint sample description: Deploy steps for the ISO 27001 Shared Services blueprint sample including blueprint artifact parameter details. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Deploy the ISO 27001 Shared Services blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/index.md
Title: ISO 27001 Shared Services blueprint sample overview description: Overview and architecture of the ISO 27001 Shared Services blueprint sample. This blueprint sample helps customers assess specific ISO 27001 controls. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Overview of the ISO 27001 Shared Services blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/index.md
The Media blueprint sample provides a set of governance guard-rails using [Azure Policy](../../../policy/overview.md) that help towards [Media](https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html)
-attestation.
+attestation.
## Blueprint sample
-The blueprint sample helps customers deploy a core set of policies for any
-Azure-deployed architecture requiring accreditation or compliance with the Media
-framework. The [control mapping](./control-mapping.md) section provides details on policies included
-within this initiative and how these policies help meet various controls defined by Media framework. When assigned to an architecture, resources are evaluated by Azure Policy for compliance with assigned policies.
+The blueprint sample helps customers deploy a core set of policies for any Azure-deployed
+architecture requiring accreditation or compliance with the Media framework. The
+[control mapping](./control-mapping.md) section provides details on policies included within this
+initiative and how these policies help meet various controls defined by Media framework. When
+assigned to an architecture, resources are evaluated by Azure Policy for compliance with assigned
+policies.
## Next steps
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
# Control mapping of the PCI-DSS v3.2.1 blueprint sample
The following article details how the Azure Blueprints PCI-DSS v3.2.1 blueprint sample maps to the
-PCI-DSS v3.2.1 controls. For more information about the controls, see [PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf).
+PCI-DSS v3.2.1 controls. For more information about the controls, see
+[PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf).
The following mappings are to the **PCI-DSS v3.2.1:2018** controls. Use the navigation on the right to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
within the information system.
## 3.4.a, 4.1, 4.1.g, 4.1.h and 6.5.3 Cryptographic Protection
-This blueprint helps you enforce your policy with the use of cryptograph controls by assigning [Azure Policy](../../../policy/overview.md)
-definitions which enforce specific cryptograph controls and audit use of weak cryptographic
-settings. Understanding where your Azure resources may have non-optimal cryptographic configurations
-can help you take corrective actions to ensure resources are configured in accordance with your
-information security policy. Specifically, the policies assigned by this blueprint require
-transparent data encryption on SQL databases; audit missing encryption on storage accounts, and
-automation account variables. There are also policies which address audit insecure connections to
-storage accounts, Function Apps, WebApp, API Apps, and Redis Cache, and audit unencrypted Service
-Fabric communication.
+This blueprint helps you enforce your policy on the use of cryptographic controls by assigning
+[Azure Policy](../../../policy/overview.md) definitions that enforce specific cryptographic controls
+and audit the use of weak cryptographic settings. Understanding where your Azure resources may have
+non-optimal cryptographic configurations can help you take corrective actions to ensure resources
+are configured in accordance with your information security policy. Specifically, the policies
+assigned by this blueprint require transparent data encryption on SQL databases and audit missing
+encryption on storage accounts and automation account variables. There are also policies that audit
+insecure connections to storage accounts, Function Apps, Web Apps, API Apps, and Redis Cache, and
+audit unencrypted Service Fabric communication.
- Function App should only be accessible over HTTPS
- Web Application should only be accessible over HTTPS
owners for Azure subscriptions. Managing subscription owner permissions can help
appropriate separation of duties.

- There should be more than one owner assigned to your subscription
-- A maximum of 3 owners should be designated for your subscription
+- A maximum of 3 owners should be designated for your subscription
## 3.2, 7.2.1, 8.3.1.a and 8.3.1.b Management of Privileged Access Rights
This blueprint helps you restrict and control privileged access rights by assigning [Azure Policy](../../../policy/overview.md) definitions to audit external accounts with owner, write and/or read permissions and employee accounts with owner and/or write permissions that don't have
-multi-factor authentication enabled. Azure role-based access control (Azure RBAC) helps to manage who
-has access to Azure resources. Understanding where custom Azure RBAC rules are implement can help you
-verify need and proper implementation, as custom Azure RBAC rules are error prone. This blueprint also
-assigns [Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active
-Directory authentication for SQL Servers. Using Azure Active Directory authentication simplifies
-permission management and centralizes identity management of database users and other Microsoft
+multi-factor authentication enabled. Azure role-based access control (Azure RBAC) helps to manage
+who has access to Azure resources. Understanding where custom Azure RBAC rules are implemented can
+help you verify need and proper implementation, as custom Azure RBAC rules are error prone. This
+blueprint also assigns [Azure Policy](../../../policy/overview.md) definitions to audit use of Azure
+Active Directory authentication for SQL Servers. Using Azure Active Directory authentication
+simplifies permission management and centralizes identity management of database users and other
+Microsoft
services.

- External accounts with owner permissions should be removed from your subscription
with elevated permissions.
## 8.1.3 Removal or Adjustment of Access Rights
-Azure role-based access control (Azure RBAC) helps you manage who has access to resources in
-Azure. Using Azure Active Directory and Azure RBAC, you can update user roles to reflect organizational
+Azure role-based access control (Azure RBAC) helps you manage who has access to resources in Azure.
+Using Azure Active Directory and Azure RBAC, you can update user roles to reflect organizational
changes. When needed, accounts can be blocked from signing in (or removed), which immediately removes access rights to Azure resources. This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions to audit deprecated accounts that should be
policy helps you take corrective actions to ensure passwords for all VM user acc
with policy.

- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of 70 days
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of
+ 70 days
- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password length to 14 characters
+- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password
+ length to 14 characters
- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24 passwords
+- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24
+ passwords
## 10.3 and 10.5.4 Audit Generation
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
provided to make each deployment of the copy of the blueprint sample unique.
- Lock Assignment
- Select the blueprint lock setting for your environment. For more information, see [blueprints resource locking](../../concepts/resource-locking.md).
+ Select the blueprint lock setting for your environment. For more information, see
+ [blueprints resource locking](../../concepts/resource-locking.md).
- Managed Identity
provided to make each deployment of the copy of the blueprint sample unique.
> [!WARNING]
> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
## Artifact parameters table
The following table provides a list of the blueprint artifact parameters:
|Artifact name|Artifact type|Parameter name|Description|
|-|-|-|-|
-|PCI v3.2.1:2018|Policy Assignment|List of Resource Types | Audit diagnostic setting for selected resource types. Default value is all resources are selected|
-|Allowed locations|Policy Assignment|List Of Allowed Locations|List of data center locations allowed for any resource to be deployed into. This list is customizable to the desired Azure locations globally. Select locations you wish to allow.|
-|Allowed Locations for resource groups|Policy Assignment |Allowed Location |This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.|
-|Deploy Auditing on SQL servers|Policy Assignment|Retention days|Data retention in number of days. Default value is 180 but PCI requires 365.|
-|Deploy Auditing on SQL servers|Policy Assignment|Resource group name for storage account|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region).|
+|PCI v3.2.1:2018|Policy Assignment|List of Resource Types | Audit diagnostic setting for selected resource types. Default value is all resources are selected|
+|Allowed locations|Policy Assignment|List Of Allowed Locations|List of data center locations allowed for any resource to be deployed into. This list is customizable to the desired Azure locations globally. Select locations you wish to allow.|
+|Allowed Locations for resource groups|Policy Assignment |Allowed Location |This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.|
+|Deploy Auditing on SQL servers|Policy Assignment|Retention days|Data retention in number of days. Default value is 180 but PCI requires 365.|
+|Deploy Auditing on SQL servers|Policy Assignment|Resource group name for storage account|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region).|
## Next steps
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/control-mapping.md
vulnerabilities in your deployed resources.
- Advanced data security should be enabled on your SQL servers - Auditing on SQL server should be enabled - Vulnerabilities in security configuration on your virtual machine scale sets should be remediated-- Vulnerabilities on your SQL databases should be remediated
+- Vulnerabilities on your SQL databases should be remediated
- Vulnerabilities in security configuration on your machines should be remediated ## 1.3 Denial of Service Protection
exfiltration of information.
- Deploy Threat Detection on SQL servers > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial/control-mapping.md
Title: UK OFFICIAL & UK NHS blueprint sample controls description: Control mapping of the UK OFFICIAL and UK NHS blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Control mapping of the UK OFFICIAL and UK NHS blueprint samples
unrestricted access, allow list activity, and threats.
- Deploy Threat Detection on SQL servers - Deploy default Microsoft IaaSAntimalware extension for Windows Server
-## 9 Secure User Management
+## 9 Secure User Management
Azure role-based access control (Azure RBAC) helps you manage who has access to resources in Azure. Using the Azure portal, you can review who has access to Azure resources and their permissions. This
workspace.
- \[Preview\]: Deploy Log Analytics Agent for Windows VMs - Deploy network watcher when virtual networks are created - ## Next steps Now that you've reviewed the control mapping of the UK OFFICIAL and UK NHS blueprints, visit the
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial/deploy.md
Title: Deploy UK OFFICIAL & UK NHS blueprint samples description: Deploy steps for the UK OFFICIAL and UK NHS blueprint samples including blueprint artifact parameter details. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Deploy the UK OFFICIAL and UK NHS blueprint samples
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial/index.md
Title: UK OFFICIAL & UK NHS blueprint sample overview description: Overview and architecture of the UK OFFICIAL and UK NHS blueprint samples. This blueprint sample helps customers assess specific controls. Previously updated : 02/05/2021 Last updated : 04/30/2021 # Overview of the UK OFFICIAL and UK NHS blueprint samples
governance Create Management Group Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-dotnet.md
required packages.
using Microsoft.Rest; using Microsoft.Azure.Management.ManagementGroups; using Microsoft.Azure.Management.ManagementGroups.Models;
-
+ namespace mgCreate { class Program
required packages.
string strClientSecret = args[2]; string strGroupId = args[3]; string strDisplayName = args[4];
-
+ var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{strTenant}"); var authResult = await authContext.AcquireTokenAsync( "https://management.core.windows.net", new ClientCredential(strClientId, strClientSecret));
-
+ using (var client = new ManagementGroupsAPIClient(new TokenCredentials(authResult.AccessToken))) { var mgRequest = new CreateManagementGroupRequest
management group can hold subscriptions or other management groups.
To learn more about management groups and how to manage your resource hierarchy, continue to: > [!div class="nextstepaction"]
-> [Manage your resources with management groups](./manage.md)
+> [Manage your resources with management groups](./manage.md)
governance Create Management Group Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-go.md
that can create a management group.
1. Create the Go application and save the following source as `mgCreate.go`:
- ```Go
+ ```go
package main
-
+ import ( "context" "fmt" "os"
-
+ mg "github.com/Azure/azure-sdk-for-go/services/preview/resources/mgmt/2018-03-01-preview/managementgroups" "github.com/Azure/go-autorest/autorest/azure/auth" )
-
+ func main() { // Get variables from command line arguments var mgName = os.Args[1]
-
+ // Create and authorize a client mgClient := mg.NewClient() authorizer, err := auth.NewAuthorizerFromCLI()
that can create a management group.
} else { fmt.Printf(err.Error()) }
-
+ // Create the request Request := mg.CreateManagementGroupRequest{ Name: &mgName, }
-
+ // Run the query and get the results var results, queryErr = mgClient.CreateOrUpdate(context.Background(), mgName, Request, "no-cache") if queryErr == nil {
management group can hold subscriptions or other management groups.
To learn more about management groups and how to manage your resource hierarchy, continue to: > [!div class="nextstepaction"]
-> [Manage your resources with management groups](./manage.md)
+> [Manage your resources with management groups](./manage.md)
governance Create Management Group Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-python.md
Python can be used, including [bash on Windows 10](/windows/wsl/install-win10) o
```python # Import management group classes from azure.mgmt.managementgroups import ManagementGroupsAPI
-
+ # Import specific methods and models from other libraries from azure.common.credentials import get_azure_cli_credentials from azure.common.client_factory import get_client_from_cli_profile from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
-
+ # Wrap all the work in a function def createmanagementgroup( strName ): # Get your credentials from Azure CLI (development only!) and get your subscription list
Python can be used, including [bash on Windows 10](/windows/wsl/install-win10) o
subsList = [] for sub in subsRaw: subsList.append(sub.get('subscription_id'))
-
+ # Create management group client and set options mgClient = get_client_from_cli_profile(ManagementGroupsAPI) mg_request = {'name': strName, 'display_name': strName}
-
+ # Create management group mg = mgClient.management_groups.create_or_update(group_id=strName,create_management_group_request=mg_request)
-
+ # Show results print(mg)
-
+ createmanagementgroup("MyNewMG") ```
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/manage.md
subscriptions you might have. To learn more about management groups, see
> Azure Resource Manager user tokens and the management group cache last for 30 minutes before they are > forced to refresh. After doing any action like moving a management group or subscription, it might > take up to 30 minutes to show. To see the updates sooner, you need to update your token by
-> refreshing the browser, signing in and out, or requesting a new token.
+> refreshing the browser, signing in and out, or requesting a new token.
## Change the name of a management group
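A sketch of the rename from Azure CLI (the flag names are assumed from the `update` command; verify with `az account management-group update --help`):

```azurecli-interactive
az account management-group update --name 'Contoso' --display-name 'Contoso Group'
```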
az account management-group delete --name 'Contoso'
## View management groups
-You can view any management group you have a direct or inherited Azure role on.
+You can view any management group you have a direct or inherited Azure role on.
### View in the portal
You can view any management group you have a direct or inherited Azure role on.
You use the Get-AzManagementGroup command to retrieve all groups. See [Az.Resources](/powershell/module/az.resources/Get-AzManagementGroup) modules for the full list of
-management group GET PowerShell commands.
+management group GET PowerShell commands.
```azurepowershell-interactive Get-AzManagementGroup
Get-AzManagementGroup -GroupName 'Contoso'
``` To return a specific management group and all the levels of the hierarchy under it, use **-Expand**
-and **-Recurse** parameters.
+and **-Recurse** parameters.
```azurepowershell-interactive PS C:\> $response = Get-AzManagementGroup -GroupName TestGroupParent -Expand -Recurse
Children :
### View in Azure CLI
-You use the list command to retrieve all groups.
+You use the list command to retrieve all groups.
```azurecli-interactive az account management-group list
and **-Recurse** parameters.
az account management-group show --name 'Contoso' -e -r ```
-## Moving management groups and subscriptions
+## Moving management groups and subscriptions
One reason to create a management group is to bundle subscriptions together. Only management groups and subscriptions can be made children of another management group. A subscription that moves to a
management group inherits all user access and policies from the parent managemen
When moving a management group or subscription to be a child of another management group, three rules need to be evaluated as true.
-If you're doing the move action, you need:
+If you're doing the move action, you need:
- Management group write and Role Assignment write permissions on the child subscription or management group.
To see what permissions you have in the Azure portal, select the management grou
**IAM**. To learn more on Azure roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-## Move subscriptions
+## Move subscriptions
### Add an existing Subscription to a management group in the portal
To see what permissions you have in the Azure portal, select the management grou
1. Select **All services** > **Management groups**.
-1. Select the management group you're planning that is the current parent.
+1. Select the management group that is the current parent.
1. Select the ellipse at the end of the row for the subscription in the list you want to move.
To see what permissions you have in the Azure portal, select the management grou
### Move subscriptions in PowerShell
-To move a subscription in PowerShell, you use the New-AzManagementGroupSubscription command.
+To move a subscription in PowerShell, you use the New-AzManagementGroupSubscription command.
```azurepowershell-interactive New-AzManagementGroupSubscription -GroupName 'Contoso' -SubscriptionId '12345678-1234-1234-1234-123456789012'
To move a subscription in CLI, you use the add command.
az account management-group subscription add --name 'Contoso' --subscription '12345678-1234-1234-1234-123456789012' ```
-To remove the subscription from the management group, use the subscription remove command.
+To remove the subscription from the management group, use the subscription remove command.
```azurecli-interactive az account management-group subscription remove --name 'Contoso' --subscription '12345678-1234-1234-1234-123456789012'
az account management-group subscription remove --name 'Contoso' --subscription
### Move subscriptions in ARM template
-To move a subscription in an Azure Resource Manager template (ARM template), use the following template.
+To move a subscription in an Azure Resource Manager template (ARM template), use the following
+template.
```json {
To move a subscription in an Azure Resource Manager template (ARM template), use
} ```
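For orientation, a minimal sketch of such a template; the parameter names are illustrative, and the resource type is `Microsoft.Management/managementGroups/subscriptions`:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "targetMgId": { "type": "string" },
    "subscriptionId": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Management/managementGroups/subscriptions",
      "apiVersion": "2020-05-01",
      "name": "[concat(parameters('targetMgId'), '/', parameters('subscriptionId'))]",
      "properties": {}
    }
  ]
}
```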
-## Move management groups
+## Move management groups
### Move management groups in the portal
To move a subscription in an Azure Resource Manager template (ARM template), use
- Selecting new will create a new management group. - Selecting an existing one will present you with a drop-down of all the management groups you can
- move to this management group.
+ move to this management group.
:::image type="content" source="./media/add_context_MG.png" alt-text="Screenshot of the 'Add management group' options for creating a new management group." border="false":::
group.
```azurepowershell-interactive $parentGroup = Get-AzManagementGroup -GroupName ContosoIT Update-AzManagementGroup -GroupName 'Contoso' -ParentId $parentGroup.id
-```
+```
### Move management groups in Azure CLI
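A sketch of the move from Azure CLI, reusing the ContosoIT parent from the PowerShell example (the `--parent` flag is assumed; verify with `az account management-group update --help`):

```azurecli-interactive
az account management-group update --name 'Contoso' \
  --parent "/providers/Microsoft.Management/managementGroups/ContosoIT"
```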
management groups looks like **"/providers/Microsoft.Management/managementGroups
## Referencing management groups from other Resource Providers When referencing management groups from other Resource Providers' actions, use the following path as
-the scope. This path is used when using PowerShell, Azure CLI, and REST APIs.
+the scope. Use this path with PowerShell, Azure CLI, and REST APIs.
`/providers/Microsoft.Management/managementGroups/{yourMgID}`
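As an illustration, the same path works as the `--scope` of a role assignment; the assignee and role here are placeholders:

```azurecli-interactive
az role assignment create --assignee "user@contoso.com" --role "Reader" \
  --scope "/providers/Microsoft.Management/managementGroups/{yourMgID}"
```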
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/overview.md
you can assign your own account as owner of the root management group.
root management group. See [Change the name of a management group](manage.md#change-the-name-of-a-management-group) to update the name of a management group.-- The root management group can't be moved or deleted, unlike other management groups.
+- The root management group can't be moved or deleted, unlike other management groups.
- All subscriptions and management groups fold up to the one root management group within the directory. - All resources in the directory fold up to the root management group for global management.
you can assign your own account as owner of the root management group.
- All Azure customers can see the root management group, but not all customers have access to manage that root management group. - Everyone who has access to a subscription can see the context of where that subscription is in
- the hierarchy.
+ the hierarchy.
- No one is given default access to the root management group. Azure AD Global Administrators are the only users who can elevate themselves to gain access. Once they have access to the root management group, the global administrators can assign any Azure role to other users to manage
you can assign your own account as owner of the root management group.
> [!IMPORTANT] > Any assignment of user access or policy assignment on the root management group **applies to all > resources within the directory**. Because of this, all customers should evaluate the need to have
-> items defined on this scope. User access and policy assignments should be "Must Have" only at this
+> items defined on this scope. User access and policy assignments should be "Must Have" only at this
> scope. ## Initial setup of management groups
There are two options for resolving this issue.
subscriptions. If you have questions on this backfill process, contact: `managementgroups@microsoft.com`
-
+ ## Management group access Azure management groups support
Azure custom role support for management groups is currently in preview with som
[limitations](#limitations). You can define the management group scope in the Role Definition's assignable scope. That Azure custom role will then be available for assignment on that management group and any management group, subscription, resource group, or resource under it. This custom role
-will inherit down the hierarchy like any built-in role.
+will inherit down the hierarchy like any built-in role.
### Example definition
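As a rough illustration (the role name, action, and management group below are hypothetical), a custom role that lists a management group in its assignable scopes might look like:

```json
{
  "Name": "Contoso VM Reader",
  "IsCustom": true,
  "Description": "Can read virtual machines.",
  "Actions": [ "Microsoft.Compute/virtualMachines/read" ],
  "NotActions": [],
  "AssignableScopes": [
    "/providers/Microsoft.Management/managementGroups/Marketing"
  ]
}
```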
For example, let's look at a small section of a hierarchy for a visual.
:::image-end::: Let's say there's a custom role defined on the Marketing management group. That custom role is then
-assigned on the two free trial subscriptions.
+assigned on the two free trial subscriptions.
If we try to move one of those subscriptions to be a child of the Production management group, this move would break the path from subscription role assignment to the Marketing management group role definition. In this scenario, you'll receive an error saying the move isn't allowed since it will
-break this relationship.
+break this relationship.
There are a couple of different options to fix this scenario: - Remove the role assignment from the subscription before moving the subscription to a new parent
There are a couple different options to fix this scenario:
- Add the subscription to the Role Definition's assignable scope. - Change the assignable scope within the role definition. In the above example, you can update the assignable scopes from Marketing to Root Management Group so that the definition can be reached by
- both branches of the hierarchy.
+ both branches of the hierarchy.
- Create another Custom Role that is defined in the other branch. This new role requires the role
- assignment to be changed on the subscription also.
+ assignment to be changed on the subscription also.
-### Limitations
+### Limitations
Limitations exist when using custom roles on management groups. - You can only define one management group in the assignable scopes of a new role. This limitation reduces the number of situations where role definitions and role assignments are disconnected. This situation happens when a subscription or management group with a role
- assignment moves to a different parent that doesn't have the role definition.
+ assignment moves to a different parent that doesn't have the role definition.
- Resource provider data plane actions can't be defined in management group custom roles. This restriction is in place as there's a latency issue with updating the data plane resource providers. This latency issue is being worked on and these actions will be disabled from the role
There are limitations that exist when using custom roles on management groups.
> information, see > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Moving management groups and subscriptions
+## Moving management groups and subscriptions
To move a management group or subscription to be a child of another management group, three rules need to be evaluated as true.
-If you're doing the move action, you need:
+If you're doing the move action, you need:
- Management group write and Role Assignment write permissions on the child subscription or management group.
governance Assign Policy Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-dotnet.md
required packages.
using Microsoft.Rest; using Microsoft.Azure.Management.ResourceManager; using Microsoft.Azure.Management.ResourceManager.Models;
-
+ namespace policyAssignment { class Program
required packages.
string strPolicyDefID = args[6]; string strDescription = args[7]; string strScope = args[8];
-
+ var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{strTenant}"); var authResult = await authContext.AcquireTokenAsync( "https://management.core.windows.net",
Now that your policy assignment is created, you can identify resources that aren
using Microsoft.Rest; using Microsoft.Azure.Management.PolicyInsights; using Microsoft.Azure.Management.PolicyInsights.Models;
-
+ namespace policyAssignment { class Program
Now that your policy assignment is created, you can identify resources that aren
string strClientSecret = args[2]; string strSubscriptionId = args[3]; string strName = args[4];
-
+ var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{strTenant}"); var authResult = await authContext.AcquireTokenAsync( "https://management.core.windows.net", new ClientCredential(strClientId, strClientSecret));
-
+ using (var client = new PolicyInsightsClient(new TokenCredentials(authResult.AccessToken))) { var policyQueryOptions = new QueryOptions
Now that your policy assignment is created, you can identify resources that aren
Filter = $"IsCompliant eq false and PolicyAssignmentId eq '{strName}'", Apply = "groupby(ResourceId)" };
-
+ var response = await client.PolicyStates.ListQueryResultsForSubscriptionAsync( "latest", strSubscriptionId, policyQueryOptions); Console.WriteLine(response.Odatacount);
governance Assign Policy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-javascript.md
identifies resources that aren't compliant to the conditions set in the policy d
const argv = require("yargs").argv; const authenticator = require("@azure/ms-rest-nodeauth"); const policyObjects = require("@azure/arm-policy");
-
+ if (argv.subID && argv.name && argv.displayName && argv.policyDefID && argv.scope && argv.description) {
-
+ const createAssignment = async () => { const credentials = await authenticator.interactiveLogin(); const client = new policyObjects.PolicyClient(credentials, argv.subID); const assignments = new policyObjects.PolicyAssignments(client);
-
+ const result = await assignments.create( argv.scope, argv.name,
identifies resources that aren't compliant to the conditions set in the policy d
); console.log(result); };
-
+ createAssignment(); } ```
Now that your policy assignment is created, you can identify resources that aren
const argv = require("yargs").argv; const authenticator = require("@azure/ms-rest-nodeauth"); const policyInsights = require("@azure/arm-policyinsights");
-
+ if (argv.subID && argv.name) {
-
+ const getStates = async () => {
-
+ const credentials = await authenticator.interactiveLogin(); const client = new policyInsights.PolicyInsightsClient(credentials); const policyStates = new policyInsights.PolicyStates(client);
Now that your policy assignment is created, you can identify resources that aren
); console.log(result); };
-
+ getStates(); } ```
To learn more about assigning policy definitions to validate that new resources
continue to the tutorial for: > [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-template.md
_Audit VMs that do not use managed disks_. For a partial list of available built
[Azure Policy samples](./samples/index.md). The template used in this quickstart is from
-[Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/quickstarts/microsoft.authorization/azurepolicy-assign-builtinpolicy-resourcegroup/).
+[Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azurepolicy-assign-builtinpolicy-resourcegroup/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.authorization/azurepolicy-assign-builtinpolicy-resourcegroup/azuredeploy.json":::
To learn more about assigning policies to validate that new resources are compli
tutorial for: > [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-terraform.md
for Azure Policy use the
version = "~>2.0" features {} }
-
+ resource "azurerm_policy_assignment" "auditvms" { name = "audit-vm-manageddisks" scope = var.cust_scope
for Azure Policy use the
display_name = "Audit VMs without managed disks Assignment" } ```+ 1. Create `variables.tf` with the following code: ```hcl
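variable "cust_scope" {
  # Sketch of variables.tf: declares the scope referenced by the
  # azurerm_policy_assignment above. The default is a placeholder; the
  # tutorial's actual value may differ.
  default = "/subscriptions/<subscription-id>"
}
```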
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
resources have a particular tag. Policy assignments are inherited by child resou
assignment is applied to a resource group, it's applicable to all the resources in that resource group.
-The policy definition _policyRule_ schema is found here: [https://schema.management.azure.com/schemas/2019-09-01/policyDefinition.json](https://schema.management.azure.com/schemas/2019-09-01/policyDefinition.json)
+The policy definition _policyRule_ schema is found here:
+[https://schema.management.azure.com/schemas/2019-09-01/policyDefinition.json](https://schema.management.azure.com/schemas/2019-09-01/policyDefinition.json)
You use JSON to create a policy definition. The policy definition contains elements for:
A parameter has the following properties that are used in the policy definition:
- `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given. Required when updating an existing policy definition that is assigned. - `allowedValues`: (Optional) Provides an array of values that the parameter accepts during
- assignment. Allowed value comparisons are case-sensitive.
+ assignment. Allowed value comparisons are case-sensitive.
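Put together, a declaration for the **allowedLocations** parameter discussed next might look like the following sketch (values are illustrative):

```json
"parameters": {
    "allowedLocations": {
        "type": "Array",
        "metadata": {
            "displayName": "Allowed locations",
            "description": "The list of locations that resources can be deployed to.",
            "strongType": "location"
        },
        "defaultValue": [ "westus2" ],
        "allowedValues": [ "eastus2", "westus2", "westus" ]
    }
}
```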
As an example, you could define a policy definition to limit the locations where resources can be deployed. A parameter for that policy definition could be **allowedLocations**. This parameter would
Parameter value:
``` Policy:+ ```json { "count": {
The following functions are only available in policy rules:
- `policy()` - Returns the following information about the policy that is being evaluated. Properties can be accessed from the returned object (example: `[policy().assignmentId]`).
-
+ ```json { "assignmentId": "/subscriptions/ad404ddd-36a5-4ea8-b3e3-681e77487a63/providers/Microsoft.Authorization/policyAssignments/myAssignment",
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
condition and effect for each policy is independently evaluated. For example:
- Restricts resource location to 'eastus' - Assigned to resource group B in subscription A - Audit effect
-
+ This setup would result in the following outcome: - Any resource already in resource group B in 'eastus' is compliant to policy 2 and non-compliant to
to validate the right policy assignments are affecting the right scopes.
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Review what a management group is with
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
# Understand Azure Policy's Guest Configuration - Azure Policy can audit settings inside a machine, both for machines running in Azure and [Arc Connected Machines](../../../azure-arc/servers/overview.md). The validation is performed by the Guest Configuration extension and client. The extension, through the client, validates settings such
Connected Machines because it's included in the Arc Connected Machine agent.
> [!IMPORTANT] > The Guest Configuration extension and a managed identity is required to audit Azure virtual > machines. To deploy the extension at scale, assign the following policy initiative:
->
+>
> `Deploy prerequisites to enable Guest Configuration policies on virtual machines` ### Limits set on the extension
built-in content, Guest Configuration handles loading these tools automatically.
### Validation frequency
-The Guest Configuration client checks for new or changed guest assignments every 5 minutes. Once a guest assignment is
-received, the settings for that configuration are rechecked on a 15-minute interval. Results are
-sent to the Guest Configuration resource provider when the audit completes. When a policy
-[evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers) occurs, the state of the
-machine is written to the Guest Configuration resource provider. This update causes Azure Policy to
-evaluate the Azure Resource Manager properties. An on-demand Azure Policy evaluation retrieves the
-latest value from the Guest Configuration resource provider. However, it doesn't trigger a new audit
-of the configuration within the machine. The status is simultaneously written to Azure Resource Graph.
+The Guest Configuration client checks for new or changed guest assignments every 5 minutes. Once a
+guest assignment is received, the settings for that configuration are rechecked on a 15-minute
+interval. Results are sent to the Guest Configuration resource provider when the audit completes.
+When a policy [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers) occurs, the
+state of the machine is written to the Guest Configuration resource provider. This update causes
+Azure Policy to evaluate the Azure Resource Manager properties. An on-demand Azure Policy evaluation
+retrieves the latest value from the Guest Configuration resource provider. However, it doesn't
+trigger a new audit of the configuration within the machine. The status is simultaneously written to
+Azure Resource Graph.
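For reference, an on-demand evaluation can be started from PowerShell; note that, per the paragraph above, this evaluates the latest reported state rather than triggering a new in-machine audit:

```azurepowershell-interactive
# Start an on-demand Azure Policy compliance evaluation for the current subscription.
Start-AzPolicyComplianceScan
```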
## Supported client types
Capture information from log files using
[Azure VM Run Command](../../../virtual-machines/linux/run-command.md), the following example Bash script can be helpful.
-```Bash
+```bash
linesToIncludeBeforeMatch=0 linesToIncludeAfterMatch=10 logPath=/var/lib/GuestConfig/gc_agent_logs/gc_agent.log
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/policy-for-kubernetes.md
aligns with how the add-on was installed:
Redeploy the cluster definition to AKS Engine after changing the **addons** property for _azure-policy_ to false: - ```json "addons": [{ "name": "azure-policy",
governance Regulatory Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/regulatory-compliance.md
To link a custom Regulatory Compliance initiative to your Azure Security Center
When an initiative definition has been created with [groups](./initiative-definition-structure.md#policy-definition-groups), the **Compliance** details
-page in portal for that initiative has additional information.
+page in portal for that initiative has additional information.
A new tab, **Controls** is added to the page. Filtering is available by **compliance domain** and policy definitions are grouped by the `title` field from the **policyMetadata** object. Each row
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/export-resources.md
To export a policy definition from Azure portal, follow these steps:
1. On the **Policies** tab, set the scope to search by selecting the ellipsis and picking a combination of management groups, subscriptions, or resource groups.
-
+ 1. Use the **Add policy definition(s)** button to search the scope for which objects to export. In the side window that opens, select each object to export. Filter the selection by the search box or the type. Once you've selected all objects to export, use the **Add** button at the bottom of
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/get-compliance-data.md
that you get the latest compliance status at a convenient time. Optionally, this
generate a report on the compliance state of scanned resources for further analysis or for archiving.
-The following example runs a compliance scan for a subscription.
+The following example runs a compliance scan for a subscription.
```yaml on:
- schedule:
+ schedule:
- cron: '0 8 * * *' # runs every morning 8am jobs: assess-policy-compliance: runs-on: ubuntu-latest
- steps:
+ steps:
- name: Login to Azure uses: azure/login@v1 with:
- creds: ${{secrets.AZURE_CREDENTIALS}}
+ creds: ${{secrets.AZURE_CREDENTIALS}}
-
- name: Check for resource compliance uses: azure/policy-compliance-scan@v0 with:
governance Guest Configuration Create Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-group-policy.md
machines.
> deploy the extension at scale across all Windows machines, assign the following policy > definitions: > - [Deploy prerequisites to enable Guest Configuration Policy on Windows VMs.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ecd903d-91e7-4726-83d3-a229d7f2e293)
->
+>
> Don't use secrets or confidential information in custom content packages. The DSC community has published the
Policy content. For details about using the BaselineManagement module, see the a
In this guide, we walk through the process to create an Azure Policy Guest Configuration package from a Group Policy Object (GPO). While the walkthrough outlines conversion of the Windows Server
-2019 Security Baseline, the same process can be applied to other GPOs.
+2019 Security Baseline, the same process can be applied to other GPOs.
## Download Windows Server 2019 Security Baseline and install related PowerShell modules
Guest Configuration and Baseline Management modules.
} New-GuestConfigurationPolicy @NewGuestConfigurationPolicySplat ```
-
+ 1. Publish the policy definitions using the `Publish-GuestConfigurationPolicy` cmdlet. The cmdlet only has the **Path** parameter that points to the location of the JSON files created by `New-GuestConfigurationPolicy`. To run the Publish command, you need access to create policy
governance Guest Configuration Create Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-linux.md
non-Azure machine.
> The Guest Configuration extension is required to perform audits in Azure virtual machines. To > deploy the extension at scale across all Linux machines, assign the following policy definition: > `Deploy prerequisites to enable Guest Configuration Policy on Linux VMs`
->
+>
> Don't use secrets or confidential information in custom content packages. ## Install the PowerShell module
Operating Systems where the module can be installed:
> The cmdlet `Test-GuestConfigurationPackage` requires OpenSSL version 1.0, due to a dependency on > OMI. This causes an error on any environment with OpenSSL 1.1 or later. >
-> Running the cmdlet `Test-GuestConfigurationPackage` is only supported on Windows
+> Running the cmdlet `Test-GuestConfigurationPackage` is only supported on Windows
> for Guest Configuration module version 2.1.0. The Guest Configuration resource module requires the following software:
required resource property. Create a YAML file and a Ruby script file, as detail
First, create the YAML file used by InSpec. The file provides basic information about the
-```YaML
+```yaml
name: linux-path Title: Linux path maintainer: Test
Save this file with name `inspec.yml` to a folder named `linux-path` in your pro
Next, create the Ruby file with the InSpec language abstraction used to audit the machine.
-```Ruby
+```ruby
describe file('/tmp') do it { should exist } end
project.
Define the input in the Ruby file where you script what to audit on the machine. An example is given below.
-```Ruby
+```ruby
attr_path = attribute('path', description: 'The file path to validate.') describe file(attr_path) do
New-GuestConfigurationPolicy -ContentUri $uri `
-Version 1.0.0 ``` - ## Policy lifecycle If you would like to release an update to the policy, make the change for both the Guest
governance Guest Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create.md
non-Azure machine.
> The Guest Configuration extension is required to perform audits in Azure virtual machines. To > deploy the extension at scale across all Windows machines, assign the following policy > definitions: `Deploy prerequisites to enable Guest Configuration Policy on Windows VMs`
->
+>
> Don't use secrets or confidential information in custom content packages. ## Install the PowerShell module
The DSC resource requires custom development if a community solution doesn't alr
Community solutions can be discovered by searching the PowerShell Gallery for tag [GuestConfiguration](https://www.powershellgallery.com/packages?q=Tags%3A%22GuestConfiguration%22).
-> [!Note]
+> [!NOTE]
> Guest Configuration extensibility is a "bring your own > license" scenario. Ensure you have met the terms and conditions of any third > party tools before use.
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
initiative definition.
|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
This built-in initiative is deployed as part of the
||||| |[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
-### Manage identities securely and automatically
+### Manage identities securely and automatically
**ID**: Azure Security Benchmark 7.12 **Ownership**: Customer
This built-in initiative is deployed as part of the
|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
This built-in initiative is deployed as part of the
|[Ensure that 'HTTP Version' is the latest, if used to run the Web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c122334-9d20-4eb8-89ea-ac9a705b74ae) |Periodically, newer versions are released for HTTP either due to security flaws or to include additional functionality. Using the latest HTTP version for web apps to take advantage of security fixes, if any, and/or new functionalities of the newer version. Currently, this policy only applies to Linux web apps. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_HTTP_Latest.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-3-0.md
initiative definition.
|[FTPS should be required in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cmmc-l3.md
This built-in initiative is deployed as part of the
|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights to your network in Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/hipaa-hitrust-9-2.md
This built-in initiative is deployed as part of the
|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/iso-27001.md
This built-in initiative is deployed as part of the
|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/new-zealand-ism.md
This built-in initiative is deployed as part of the
|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-171-r2.md
initiative definition.
|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r4.md
This built-in initiative is deployed as part of the
|[Microsoft Managed Control 1727 - Memory Protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F697175a7-9715-4e89-b98b-c6f605888fa3) |Microsoft implements this System and Information Integrity control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/MicrosoftManagedControl1727.json) | > [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
+> Availability of specific Azure Policy definitions may vary in Azure Government and other national
> clouds. ## Next steps
governance Pattern Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-deploy-resources.md
three core components:
template parameter sets the location of the new network watcher resource. :::code language="json" source="~/policy-templates/patterns/pattern-deploy-resources.json" range="30-44":::
-
+ - **parameters** - This property defines parameters that are provided to the **template**. The parameter names must match what are defined in **template**. In this example, the parameter is named **location** to match. The value of **location** uses the `field()` function again to get
three core components:
- Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance General https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/troubleshoot/general.md
operate as intended.
To troubleshoot your policy definition, do the following: 1. First, wait the appropriate amount of time for an evaluation to finish and compliance results
- to become available in Azure portal or SDK.
+   to become available in the Azure portal or the SDK.
1. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
To troubleshoot your policy definition, do the following:
of the definition to the evaluated property value indicates why a resource was noncompliant. - If the **target value** is wrong, revise the policy definition. - If the **current value** is wrong, validate the resource payload through `resources.azure.com`.
-1. For other common issues and solutions, see [Troubleshoot: Enforcement not as expected](#scenario-enforcement-not-as-expected).
+1. For other common issues and solutions, see
+ [Troubleshoot: Enforcement not as expected](#scenario-enforcement-not-as-expected).
If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly.
Activity log.
Troubleshoot your policy assignment's enforcement by doing the following:
-1. First, wait the appropriate amount of time for an evaluation to finish and compliance results
-to become available in the Azure portal or the SDK.
+1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to
+ become available in the Azure portal or the SDK.
-1. To start a new evaluation scan with Azure PowerShell
-or the REST API, see
-[On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
+1. To start a new evaluation scan with Azure PowerShell or the REST API, see
+ [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
1. Ensure that the assignment parameters and assignment scope are set correctly and that **enforcementMode** is _Enabled_. 1. Check the [policy definition mode](../concepts/definition-structure.md#mode):
or the REST API, see
1. Verify that the resource payload matches the policy logic. This can be done by [capturing an HTTP Archive (HAR) trace](../../../azure-portal/capture-browser-trace.md) or reviewing the Azure Resource Manager template (ARM template) properties.
-1. For other common issues and solutions, see [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected).
+1. For other common issues and solutions, see
+ [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected).
If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly.
governance Policy As Code Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/policy-as-code-github.md
To export a policy definition from Azure portal, follow these steps:
1. On the **Policies** tab, set the scope to search by selecting the ellipsis and picking a combination of management groups, subscriptions, or resource groups.
-
+ 1. Use the **Add policy definition(s)** button to search the scope for which objects to export. In the side window that opens, select each object to export. Filter the selection by the search box or the type. Once you've selected all objects to export, use the **Add** button at the bottom of
scheduled time to get the latest compliance status at a convenient time. Optiona
GitHub action can also generate a report on the compliance state of scanned resources for further analysis or for archiving.
-The following example runs a compliance scan for a subscription.
+The following example runs a compliance scan for a subscription.
```yaml on:
- schedule:
+ schedule:
- cron: '0 8 * * *' # runs every morning 8am jobs: assess-policy-compliance: runs-on: ubuntu-latest
- steps:
+ steps:
- name: Login to Azure uses: azure/login@v1 with:
- creds: ${{secrets.AZURE_CREDENTIALS}}
+ creds: ${{secrets.AZURE_CREDENTIALS}}
-
- name: Check for resource compliance uses: azure/policy-compliance-scan@v0 with: scopes: | /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx- ``` ## Review
In this tutorial, you successfully accomplished the following tasks:
To learn more about the structures of policy definitions, look at this article: > [!div class="nextstepaction"]
-> [Azure Policy definition structure](../concepts/definition-structure.md)
+> [Azure Policy definition structure](../concepts/definition-structure.md)
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/route-state-change-events.md
send the events to a web app that collects and displays the messages.
Event Grid topics are Azure resources and must be placed in an Azure resource group. The resource
-Create a resource group with the [az group create](/cli/azure/group) command.
+Create a resource group with the [az group create](/cli/azure/group) command.
The following example creates a resource group named `<resource_group_name>` in the _westus_ location. Replace `<resource_group_name>` with a unique name for your resource group.
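A sketch of that command, keeping the placeholder name:

```azurecli-interactive
az group create --name <resource_group_name> --location westus
```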
policy state change events and what Event Grid can help you do:
- [Reacting to Azure Policy state change events](../concepts/event-overview.md) - [Azure Policy schema details for Event Grid](../../../event-grid/event-schema-policy.md)-- [About Event Grid](../../../event-grid/overview.md)
+- [About Event Grid](../../../event-grid/overview.md)
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
async Task ExecuteQueries(IEnumerable<string> queries)
var azureOperationResponse = await this.resourceGraphClient .ResourcesWithHttpMessagesAsync(userQueryRequest, header) .ConfigureAwait(false);
-
+ var responseHeaders = azureOperationResponse.response.Headers; int remainingQuota = /* read and parse x-ms-user-quota-remaining from responseHeaders */ TimeSpan resetAfter = /* read and parse x-ms-user-quota-resets-after from responseHeaders */
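From there, a natural continuation is to honor those values before issuing the next query; a sketch, assuming the two headers parse as indicated above:

```csharp
// Back off until the quota window resets before sending the next query.
if (remainingQuota <= 0)
{
    await Task.Delay(resetAfter).ConfigureAwait(false);
}
```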
Provide these details:
- See the language in use in [Starter queries](../samples/starter.md). - See advanced uses in [Advanced queries](../samples/advanced.md).-- Learn more about how to [explore resources](explore-resources.md).
+- Learn more about how to [explore resources](explore-resources.md).
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/query-language.md
Some property names, such as those that include a `.` or `$`, must be wrapped or
query or the property name is interpreted incorrectly and doesn't provide the expected results.

- `.` - Wrap the property name as such: `['propertyname.withaperiod']`

  Example query that wraps the property _odata.type_ (the table and other fields here are illustrative):

  ```kusto
  Resources
  | project name, properties['odata.type']
  ```
governance First Query Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-go.md
Type** of each resource.
1. Create the Go application and save the following source as `argQuery.go`:
   ```go
   package main

   import (
       "context"
       "fmt"
       "os"

       arg "github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2019-04-01/resourcegraph"
       "github.com/Azure/go-autorest/autorest/azure/auth"
   )

   func main() {
       // Get variables from command line arguments
       var query = os.Args[1]
       var subList = os.Args[2:]

       // Create and authorize a ResourceGraph client
       argClient := arg.New()
       authorizer, err := auth.NewAuthorizerFromCLI()
       if err == nil {
           argClient.Authorizer = authorizer
       } else {
           fmt.Printf(err.Error())
       }

       // Set options
       RequestOptions := arg.QueryRequestOptions{
           ResultFormat: "objectArray",
       }

       // Create the query request
       Request := arg.QueryRequest{
           Subscriptions: &subList,
           Query:         &query,
           Options:       &RequestOptions,
       }

       // Run the query and get the results
       var results, queryErr = argClient.Resources(context.Background(), Request)
       if queryErr == nil {
           fmt.Printf("Resources found: %d\n", *results.TotalRecords)
           fmt.Printf("Results: %v\n", results.Data)
       } else {
           fmt.Printf(queryErr.Error())
       }
   }
   ```
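   One way to run it, assuming you're signed in with the Azure CLI (the subscription ID is a placeholder):

   ```bash
   go run argQuery.go "Resources | project name, type | limit 5" "<subscription-id>"
   ```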
first query. To learn more about the Resource Graph language, continue to the qu
page.

> [!div class="nextstepaction"]
> [Get more information about the query language](./concepts/query-language.md)
governance First Query Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-java.md
install the required Maven packages.
```java
package com.Fabrikam;

import java.util.Arrays;
import java.util.List;
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.resourcegraph.ResourceGraphManager;
import com.azure.resourcemanager.resourcegraph.models.QueryRequest;
import com.azure.resourcemanager.resourcegraph.models.QueryRequestOptions;
import com.azure.resourcemanager.resourcegraph.models.QueryResponse;
import com.azure.resourcemanager.resourcegraph.models.ResultFormat;

public class App
{
    public static void main( String[] args )
    {
        List<String> listSubscriptionIds = Arrays.asList(args[0]);
        String strQuery = args[1];

        ResourceGraphManager manager = ResourceGraphManager.authenticate(new DefaultAzureCredentialBuilder().build(), new AzureProfile(AzureEnvironment.AZURE));

        QueryRequest queryRequest = new QueryRequest()
            .withSubscriptions(listSubscriptionIds)
            .withQuery(strQuery);

        QueryResponse response = manager.resourceProviders().resources(queryRequest);

        System.out.println("Records: " + response.totalRecords());
        System.out.println("Data:\n" + response.data());
    }
}
```
packages and run your first query. To learn more about the Resource Graph langua
query language details page.

> [!div class="nextstepaction"]
> [Get more information about the query language](./concepts/query-language.md)
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-powershell.md
top five results.
> subscriptions you have access to, one can set the PSDefaultParameterValues for `Search-AzGraph`
> cmdlet by running
> `$PSDefaultParameterValues=@{"Search-AzGraph:Subscription"= $(Get-AzSubscription).ID}`

## Clean up resources

If you wish to remove the Resource Graph module from your Azure PowerShell environment, you can do
governance First Query Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-python.md
Resource Graph query. The query returns the first five Azure resources with the
```python
# Import Azure Resource Graph library
import azure.mgmt.resourcegraph as arg

# Import specific methods and models from other libraries
from azure.common.credentials import get_azure_cli_credentials
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.resource import SubscriptionClient

# Wrap all the work in a function
def getresources( strQuery ):
    # Get your credentials from Azure CLI (development only!) and get your subscription list
    subsClient = get_client_from_cli_profile(SubscriptionClient)
    subsRaw = []
    for sub in subsClient.subscriptions.list():
        subsRaw.append(sub.as_dict())
    subsList = []
    for sub in subsRaw:
        subsList.append(sub.get('subscription_id'))

    # Create Azure Resource Graph client and set options
    argClient = get_client_from_cli_profile(arg.ResourceGraphClient)
    argQueryOptions = arg.models.QueryRequestOptions(result_format="objectArray")

    # Create query
    argQuery = arg.models.QueryRequest(subscriptions=subsList, query=strQuery, options=argQueryOptions)

    # Run query
    argResults = argClient.resources(argQuery)

    # Show Python object
    print(argResults)

getresources("Resources | project name, type | limit 5")
```
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/overview.md
portal workflow. For more information, see
Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. The query is structured the same for each language. Learn how to enable Resource Graph with:

- [Azure portal and Resource Graph Explorer](./first-query-portal.md)
- [Azure CLI](./first-query-azurecli.md#add-the-resource-graph-extension)
- [Azure PowerShell](./first-query-powershell.md#add-the-resource-graph-module)
- [Python](./first-query-python.md#add-the-resource-graph-library)
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/advanced.md
related to those network interfaces.
```kusto
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend nics=array_length(properties.networkProfile.networkInterfaces)
| mv-expand nic=properties.networkProfile.networkInterfaces
| where nics == 1 or nic.properties.primary =~ 'true' or isempty(nic)
| project vmId = id, vmName = name, vmSize=tostring(properties.hardwareProfile.vmSize), nicId = tostring(nic.id)
| join kind=leftouter (
    Resources
    | where type =~ 'microsoft.network/networkinterfaces'
    | extend ipConfigsCount=array_length(properties.ipConfigurations)
    | mv-expand ipconfig=properties.ipConfigurations
    | where ipConfigsCount == 1 or ipconfig.properties.primary =~ 'true'
    | project nicId = id, publicIpId = tostring(ipconfig.properties.publicIPAddress.id))
  on nicId
Search-AzGraph -Query "ResourceContainers | where type=='microsoft.resources/sub
This query uses the [extended properties](../concepts/query-language.md#extended-properties) on virtual machines to summarize by power states.

```kusto
Resources
| where type == 'microsoft.compute/virtualmachines'
Search-AzGraph -Query "GuestConfigurationResources | where properties.compliance
- See samples of [Starter queries](starter.md).
- Learn more about the [query language](../concepts/query-language.md).
- Learn more about how to [explore resources](../concepts/explore-resources.md).
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/starter.md
results from _Resources_, giving broad coverage to which tags are fetched. Last,
results to `distinct` paired data and excludes system-hidden tags.

```kusto
ResourceContainers
| where isnotempty(tags)
| project tags
| mvexpand tags
Search-AzGraph -Query "GuestConfigurationResources | extend vmid = split(propert
- Learn more about the [query language](../concepts/query-language.md).
- Learn more about how to [explore resources](../concepts/explore-resources.md).
- See samples of [Advanced queries](advanced.md).
governance Create Share Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/tutorials/create-share-query.md
follow these steps:
by OS** now appears in the **Query Name** list. When you select the title link of the saved query, it's loaded into a new tab with that query's name.
   > [!NOTE]
   > When a saved query is open and the tab shows its name, selecting the **Save** button
   > updates it with any changes that have been made. To create a new saved query from this open
   > query, select **Save as** and proceed as if you were saving a brand new query.
use it. To create a new Shared query, follow these steps:
   | where type =~ 'Microsoft.Compute/virtualMachines'
   | summarize count() by tostring(properties.storageProfile.osDisk.osType)
   ```
-
+ Select **Run query** to see the query results in the bottom pane. For more information about this query, see
use it. To create a new Shared query, follow these steps:
1. Select **Save** at the bottom of the **Save query** pane. The tab title changes from **Query 1** to **Count VMs by OS**. The first time the **resource-graph-queries** resource group is used, the save takes longer than expected as the resource group gets created.
-
+ :::image type="content" source="../media/create-share-query/save-shared-query-window.png" alt-text="Save the new query as a Shared query" border="false":::
   > [!NOTE]
   > You can clear the **Publish to resource-graph-queries resource group** check box if you
   > want to provide the name of an existing resource group to save the shared query into. Using the
   > default named resource group for queries makes Shared queries easier to discover. It also makes
use it. To create a new Shared query, follow these steps:
:::image type="content" source="../media/create-share-query/show-saved-shared-query.png" alt-text="Show the Shared Query with icon" border="false":::
   > [!NOTE]
   > When a saved query is open and the tab shows its name, the **Save** button updates it
   > with any changes that have been made. To create a new saved query, select **Save as** and
   > proceed as if you were saving a brand new query.
In this tutorial, you've created Private and Shared queries. To learn more about
language, continue to the query language details page.

> [!div class="nextstepaction"]
> [Get more information about the query language](../concepts/query-language.md)
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-private-link.md
Previously updated : 03/03/2021 Last updated : 04/29/2021

# Configure private link
-Private link enables you to access Azure API for FHIR over a private endpoint, a network interface that connects you privately and securely using a private IP address from your virtual network. With private link, you can access our services securely from your Vnet as a first party service without having to go through a public DNS. This article walks you through how to create, test, and manage your private endpoint for Azure API for FHIR.
+Private link enables you to access Azure API for FHIR over a private endpoint, which is a network interface that connects you privately and securely using a private IP address from your virtual network. With private link, you can access our services securely from your VNet as a first party service without having to go through a public Domain Name System (DNS). This article describes how to create, test, and manage your private endpoint for Azure API for FHIR.
>[!Note]
->Neither Private Link nor Azure API for FHIR can be moved from one resource group or subscription to another once Private Link is enabled. To move, delete the Private Link first, then move Azure API for FHIR and create a new Private Link once the move is complete. Assess potential security ramifications before deleting Private Link.
+>Neither Private Link nor Azure API for FHIR can be moved from one resource group or subscription to another once Private Link is enabled. To make a move, delete the Private Link first, then move Azure API for FHIR. Create a new Private Link once the move is complete. Assess potential security ramifications before deleting Private Link.
>
->If exporting audit logs and/metrics is enabled for Azure API for FHIR, update the export setting through Diagnostic Settings from the portal.
+>If exporting audit logs and metrics is enabled for Azure API for FHIR, update the export setting through **Diagnostic Settings** from the portal.
## Prerequisites
-Before creating a private endpoint, there are some Azure resources that you will need to create first:
+Before creating a private endpoint, there are some Azure resources that you'll need to create first:
- Resource Group – The Azure resource group that will contain the virtual network and private endpoint.
- Azure API for FHIR – The FHIR resource you would like to put behind a private endpoint.
- Virtual Network – The VNet to which your client services and Private Endpoint will be connected.
-For more information, check out the [Private Link Documentation](../../private-link/index.yml).
+For more information, see [Private Link Documentation](../../private-link/index.yml).
## Disable public network access
-Creating a private endpoint for your FHIR resource does not automatically disable public traffic to it. To do that you will have to update your FHIR resource to set a new "Public access" property from "Enabled" to "Disabled". Be careful when disabling public network access as all requests to your FHIR service that are not coming from a properly configured private endpoint will be denied. Only traffic from your private endpoints will be allowed.
+Creating a private endpoint for your FHIR resource doesn't automatically disable public traffic to it. To do this, you'll have to update your FHIR resource to set the "Public access" property from "Enabled" to "Disabled". Be careful when disabling public network access: all requests to your FHIR service that aren't coming from a properly configured private endpoint will be denied. Only traffic from your private endpoints will be allowed.
-![Disable Public Network Access](media/private-link/private-link-disable.png)
## Create private endpoint
-To create a private endpoint, a developer with RBAC permissions on the FHIR resource can use Azure portal, [Azure PowerShell](../../private-link/create-private-endpoint-powershell.md), or [Azure CLI](../../private-link/create-private-endpoint-cli.md). This article walks you through the steps on using Azure portal. Using Azure portal is recommended as it automates the creation and configuration of the Private DNS Zone. You can reference the [Private Link Quick Start Guides](../../private-link/create-private-endpoint-portal.md) for more details.
+To create a private endpoint, a developer with Role-based access control (RBAC) permissions on the FHIR resource can use the Azure portal, [Azure PowerShell](../../private-link/create-private-endpoint-powershell.md), or [Azure CLI](../../private-link/create-private-endpoint-cli.md). This article will guide you through the steps on using Azure portal. Using the Azure portal is recommended as it automates the creation and configuration of the Private DNS Zone. For more information, see [Private Link Quick Start Guides](../../private-link/create-private-endpoint-portal.md).
There are two ways to create a private endpoint. Auto Approval flow allows a user that has RBAC permissions on the FHIR resource to create a private endpoint without a need for approval. Manual Approval flow allows a user without permissions on the FHIR resource to request a private endpoint to be approved by owners of the FHIR resource.

### Auto approval
-Make sure the region for the new private endpoint is the same as the region for your Virtual Network. The region for your FHIR resource can be different.
+Ensure the region for the new private endpoint is the same as the region for your virtual network. The region for your FHIR resource can be different.
![Azure portal Basics Tab](media/private-link/private-link-portal2.png)
-For Resource Type, search and select "Microsoft.HealthcareApis/services". For Resource, select the FHIR resource. For target sub-resource, select "fhir".
+For the resource type, search and select **Microsoft.HealthcareApis/services**. For the resource, select the FHIR resource. For target sub-resource, select **FHIR**.
![Azure portal Resource Tab](media/private-link/private-link-portal1.png)
-If you do not have an existing Private DNS Zone set up, select "(New)privatelink.azurehealthcareapis.com". If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of "privatelink.azurehealthcareapis.com".
+If you do not have an existing Private DNS Zone set up, select **(New)privatelink.azurehealthcareapis.com**. If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of **privatelink.azurehealthcareapis.com**.
![Azure portal Configuration Tab](media/private-link/private-link-portal3.png)
-After the deployment is complete, you can go back to "Private endpoint connections" tab, on which you will see "Approved" as the connection state.
+After the deployment is complete, you can go back to the **Private endpoint connections** tab, where you'll see **Approved** as the connection state.
### Manual Approval
After the deployment is complete, you can go back to "Private endpoint connectio
## Test private endpoint
-To make sure that your FHIR server is not receiving public traffic after disabling public network access, try hitting the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden. Note that it can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
+To ensure that your FHIR server isn't receiving public traffic after disabling public network access, try to access the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden response.
-To make sure your private endpoint can send traffic to your server:
-1. Create a VM that is connected to the virtual network and subnet your private endpoint is configured on. To ensure your traffic from the VM is only using the private network, you can disable outbound internet traffic via NSG rule.
+> [!NOTE]
+> It can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
+
+To ensure your private endpoint can send traffic to your server:
+
+1. Create a virtual machine (VM) that is connected to the virtual network and subnet your private endpoint is configured on. To ensure your traffic from the VM is only using the private network, disable the outbound internet traffic using the network security group (NSG) rule.
2. RDP into the VM.
-3. Try hitting your FHIR serverΓÇÖs /metadata endpoint from the VM, you should receive the capability statement as a response.
+3. Access your FHIR server's /metadata endpoint from the VM. You should receive the capability statement as a response. (A quick sketch of this check follows the list.)
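As an illustrative way to run both checks (the service name is a placeholder):

```bash
curl https://<your-fhir-service>.azurehealthcareapis.com/metadata
```

From outside the virtual network, this should return 403 Forbidden once public network access is disabled; from the VM, it should return the capability statement.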
## Manage private endpoint

### View
-Private Endpoints and the associated NIC are visible in Azure portal from the resource group they were created in.
+Private endpoints and the associated network interface controller (NIC) are visible in Azure portal from the resource group they were created in.
![View in resources](media/private-link/private-link-view.png)

### Delete
-Private endpoints can only be deleted from Azure portal via the Overview blade (as below) or via the Delete option under Networking (preview)'s "Private endpoint connections" tab. Clicking the delete button will delete the private endpoint and the associated NIC. If you delete all private endpoints to the FHIR resource and the public network access is disabled, no request will make it to your FHIR server.
+Private endpoints can only be deleted from the Azure portal from the **Overview** blade or by selecting the **Remove** option under the **Networking Private endpoint connections** tab. Selecting **Remove** will delete the private endpoint and the associated NIC. If you delete all private endpoints to the FHIR resource and public network access is disabled, no request will make it to your FHIR server.
![Delete Private Endpoint](media/private-link/private-link-delete.png)
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-faq.md
Previously updated : 1/21/2021 Last updated : 04/30/2021
When you run the FHIR Server for Azure, you have direct access to the underlying
From a development standpoint, every feature that doesn't apply only to the managed service is first deployed to the open-source Microsoft FHIR Server for Azure. Once it has been validated in open-source, it will be released to the PaaS Azure API for FHIR solution. The time between the release in open-source and PaaS depends on the complexity of the feature and other roadmap priorities. This is the same process for all of our services, such as Azure IoT Connector for FHIR (preview).
-### Where can I see what is releasing into the Azure API for FHIR?
-
-To see some of what is releasing into the Azure API for FHIR, please refer to the [release](https://github.com/microsoft/fhir-server/releases) of the open-source FHIR Server. Starting in November 2020, we have tagged items with Azure-API-for-FHIR if the open-source item will release to the managed service. These features are typically available two weeks after they are on the release page in open-source. We have also included instructions on how to test the build [here] (https://github.com/microsoft/fhir-server/blob/master/docs/Testing-Releases.md) if you would like to test in your own environment. We are evaluating how to best share additional managed service updates.
### In which regions is Azure API for FHIR available?

Currently, we have general availability for both public and government in [multiple geo-regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=non-regional,us-east,us-east-2,us-central,us-north-central,us-south-central,us-west-central,us-west,us-west-2,canada-east,canada-central,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For information about government cloud services at Microsoft, check out [Azure services by FedRAMP](../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
We allow you to load any valid FHIR JSON data into the server. If you want to st
### What is the limit on _count?
-The current limit on _count is 100. If you set _count to more than 100, you will receive a warning in the bundle that only 100 records will be shown.
+The current limit on _count is 1000. If you set _count to more than 1000, you'll receive a warning in the bundle that only 1000 records will be shown.
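For example, a search that asks for the maximum page size (the service URL is a placeholder):

```
GET https://<your-fhir-service>.azurehealthcareapis.com/Patient?_count=1000
```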
### Are there any limitations on the Group Export functionality?
iot-dps Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/virtual-network-support.md
In most scenarios where DPS is configured with a VNET, your IoT Hub will also be
## Introduction
-By default, DPS hostnames map to a public endpoint with a publicly routable IP address over the Internet. This public endpoint is visible to all customers. Access to the public endpoint can be attempted by IoT devices over wide-area networks as well as on-premises networks.
+By default, DPS hostnames map to a public endpoint with a publicly routable IP address over the Internet. This public endpoint is visible to all customers. Access to the public endpoint can be attempted by IoT devices over wide-area networks and on-premises networks.
For several reasons, customers may wish to restrict connectivity to Azure resources, like DPS. These reasons include:
-* Prevent connection exposure over the public Internet. Exposure can be reduced by introducing additional layers of security via network level isolation for your IoT hub and DPS resources
+* Prevent connection exposure over the public Internet. Exposure can be reduced by introducing more layers of security via network-level isolation for your IoT hub and DPS resources.
* Enable a private connectivity experience from your on-premises network assets, ensuring that your data and traffic are transmitted directly to the Azure backbone network.
To set up a private endpoint, follow these steps:
6. Click **Review + create** and then **Create** to create your private endpoint resource.
+## Use private endpoints with devices
+
+To use private endpoints with device provisioning code, your provisioning code must use the specific **Service endpoint** for your DPS resource as shown on the overview page of your DPS resource in the [Azure portal](https://portal.azure.com). The service endpoint has the following form.
+
+`<Your DPS Tenant Name>.azure-devices-provisioning.net`
+
+Most sample code demonstrated in our documentation and SDKs uses the **Global device endpoint** (`global.azure-devices-provisioning.net`) and **ID Scope** to resolve a particular DPS resource. Use the service endpoint in place of the global device endpoint when you connect to a DPS resource over a private link to provision your devices.
+
+For example, the provisioning device client sample ([prov_dev_client_sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/prov_dev_client_sample)) in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) is designed to use the **Global device endpoint** as the global provisioning URI (`global_prov_uri`) in [prov_dev_client_sample.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client/samples/prov_dev_client_sample/prov_dev_client_sample.c).
+++
+To use the sample with a private link, the `global_prov_uri` assignment would be changed to use the service endpoint for your DPS resource. For example, if your service endpoint were `mydps.azure-devices-provisioning.net`, the code would look as follows.
+
+```C
+static const char* global_prov_uri = "global.azure-devices-provisioning.net";
+static const char* service_uri = "mydps.azure-devices-provisioning.net";
+static const char* id_scope = "[ID Scope]";
+```
+
+```C
+ PROV_DEVICE_RESULT prov_device_result = PROV_DEVICE_RESULT_ERROR;
+ PROV_DEVICE_HANDLE prov_device_handle;
+ if ((prov_device_handle = Prov_Device_Create(service_uri, id_scope, prov_transport)) == NULL)
+ {
+ (void)printf("failed calling Prov_Device_Create\r\n");
+ }
+```
## Request a private endpoint

You can request a private endpoint to a DPS resource by resource ID. In order to make this request, you need the resource owner to supply you with the resource ID.
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
iotedge logs <container name>
You can also use a [direct method](how-to-retrieve-iot-edge-logs.md#upload-module-logs) call to a module on your device to upload the logs of that module to Azure Blob Storage.
+## Clean up container logs
+
+By default, the Moby container engine doesn't set container log size limits. Over time, this can lead to the device filling up with logs and running out of disk space. If large container logs are affecting your IoT Edge device's performance, use the following command to force remove the container along with its related logs.
+
+If you're still troubleshooting, wait until after you've inspected the container logs to take this step.
+
+>[!WARNING]
+>If you force remove the edgeHub container while it has an undelivered message backlog and no [host storage](how-to-access-host-storage-from-module.md) set up, the undelivered messages will be lost.
+
+```cmd
+docker rm --force <container name>
+```
+
+For ongoing logs maintenance and production scenarios, [place limits on log size](production-checklist.md#place-limits-on-log-size).
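For example, with the default `json-file` logging driver, those limits typically live in `/etc/docker/daemon.json` on the device (the values here are illustrative; see the linked checklist for recommended settings):

```json
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}
```

Restart the container engine after changing this file for the limits to take effect.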
+ ## View the messages going through the IoT Edge hub <!--1.1 -->
iot-hub Quickstart Send Telemetry C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-c.md
For this quickstart, you'll be using the [Azure IoT device SDK for C](iot-hub-de
For the following environments, you can use the SDK by installing these packages and libraries:
-* **Linux**: apt-get packages are available for Ubuntu 16.04 and 18.04 using the following CPU architectures: amd64, arm64, armhf, and i386. For more information, see [Using apt-get to create a C device client project on Ubuntu](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/ubuntu_apt-get_sample_setup.md).
+* **Linux**: apt-get packages are available for Ubuntu 16.04 and 18.04 using the following CPU architectures: amd64, arm64, armhf, and i386. For more information, see [Using apt-get to create a C device client project on Ubuntu](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md#set-up-a-linux-development-environment).
* **mbed**: For developers creating device applications on the mbed platform, we've published a library and samples that will get you started in minutes with Azure IoT Hub. For more information, see [Use the mbed library](https://github.com/Azure/azure-iot-sdk-c/blob/master/iothub_client/readme.md#mbed).
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/howto-logging.md
Title: How to enable Azure Key Vault logging
+ Title: Enable Azure Key Vault logging
description: How to enable logging for Azure Key Vault, which saves information in an Azure storage account that you provide.
Last updated 10/01/2020
#Customer intent: As an Azure Key Vault administrator, I want to enable logging so I can monitor how my key vaults are accessed.
-# How to enable Key Vault logging
+# Enable Key Vault logging
-After you create one or more key vaults, you'll likely want to monitor how and when your key vaults are accessed, and by whom. For full details on the feature, see [Key Vault logging](logging.md).
+After you create one or more key vaults, you'll likely want to monitor how and when your key vaults are accessed, and by whom. For full details on the feature, see [Azure Key Vault logging](logging.md).
What is logged:

* Creating, modifying, or deleting these keys or secrets.
* Signing, verifying, encrypting, decrypting, wrapping and unwrapping keys, getting secrets, and listing keys and secrets (and their versions).
* Unauthenticated requests that result in a 401 response. Examples are requests that don't have a bearer token, that are malformed or expired, or that have an invalid token.
-* Event Grid notification events for near expiry, expired and vault access policy changed (new version event is not logged). Events are logged regardless if there is event subscription created on key vault. For more information see, [Event Grid event schema for Key Vault](../../event-grid/event-schema-key-vault.md)
+* Azure Event Grid notification events for the following conditions: expired, near expiration, and changed vault access policy (the new version event isn't logged). Events are logged regardless of whether an event subscription exists on the key vault. For more information, see [Azure Key Vault as Event Grid source](../../event-grid/event-schema-key-vault.md).
## Prerequisites

To complete this tutorial, you must have the following:

* An existing key vault that you have been using.
-* [Azure Cloud Shell](https://shell.azure.com) - Bash environment
+* [Azure Cloud Shell](https://shell.azure.com) - Bash environment.
* Sufficient storage on Azure for your Key Vault logs.
-This guide commands are formatted for [Cloud Shell](https://shell.azure.com) with Bash as an environment.
+In this article, commands are formatted for [Cloud Shell](https://shell.azure.com) with Bash as an environment.
## Connect to your Key Vault subscription
-The first step in setting up key logging is connecting to subscription containing your key vault. This is especially important if you have multiple subscriptions associated with your account.
+The first step in setting up key logging is connecting to the subscription containing your key vault. This is especially important if you have multiple subscriptions associated with your account.
-With the Azure CLI, you can view all your subscriptions using the [az account list](/cli/azure/account#az_account_list) command, and then connect to one using [az account set](/cli/azure/account#az_account_set):
+With the Azure CLI, you can view all your subscriptions by using the [az account list](/cli/azure/account#az_account_list) command. Then you connect to one by using the [az account set](/cli/azure/account#az_account_set) command:
```azurecli-interactive
az account list
az account set --subscription "<subscriptionID>"
```
-With Azure PowerShell, you can first list your subscriptions using the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet, and then connect to one using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet:
+With Azure PowerShell, you can first list your subscriptions by using the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet. Then you connect to one by using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet:
```powershell-interactive
Get-AzSubscription
Set-AzContext -SubscriptionId "<subscriptionID>"
```
## Create a storage account for your logs
-Although you can use an existing storage account for your logs, we'll create a new storage account dedicated to Key Vault logs.
+Although you can use an existing storage account for your logs, here you create a new storage account dedicated to Key Vault logs.
-For additional ease of management, we'll also use the same resource group as the one that contains the key vault. In the [Azure CLI quickstart](quick-create-cli.md) and [Azure PowerShell quickstart](quick-create-powershell.md), this resource group is named **myResourceGroup**, and the location is *eastus*. Replace these values with your own, as applicable.
+For additional ease of management, you'll also use the same resource group as the one that contains the key vault. In the [Azure CLI quickstart](quick-create-cli.md) and [Azure PowerShell quickstart](quick-create-powershell.md), this resource group is named **myResourceGroup**, and the location is *eastus*. Replace these values with your own, as applicable.
-We will also need to provide a storage account name. Storage account names must be unique, between 3 and 24 characters in length, and use numbers and lower-case letters only. Lastly, we will be creating a storage account of the "Standard_LRS" SKU.
+You also need to provide a storage account name. Storage account names must be unique, between 3 and 24 characters in length, and use numbers and lowercase letters only. Lastly, you create a storage account of the `Standard_LRS` SKU.
With the Azure CLI, use the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command.
With Azure PowerShell, use the [New-AzStorageAccount](/powershell/module/az.stor
```powershell-interactive
New-AzStorageAccount -ResourceGroupName myResourceGroup -Name "<your-unique-storage-account-name>" -Type "Standard_LRS" -Location "eastus"
```
-In either case, note the "id" of the storage account. The Azure CLI operation returns the "id" in the output. To obtain the "id" with Azure PowerShell, use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) and assigned the output to a the variable $sa. You can then see the storage account with $sa.id. (The "$sa.Context" property will also be used, later in this article.)
+In either case, note the ID of the storage account. The Azure CLI operation returns the ID in the output. To obtain the ID with Azure PowerShell, use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount), and assign the output to the variable `$sa`. You can then see the storage account with `$sa.id`. (The `$sa.Context` property is also used later in this article.)
```powershell-interactive
$sa = Get-AzStorageAccount -Name "<your-unique-storage-account-name>" -ResourceGroup "myResourceGroup"
$sa.id
```
-The "id" of the storage account will be in the format "/subscriptions/<your-subscription-ID>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/<your-unique-storage-account-name>".
+The ID of the storage account is in the following format: "/subscriptions/*your-subscription-ID*/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/*your-unique-storage-account-name*".
> [!NOTE]
-> If you decide to use an existing storage account, it must use the same subscription as your key vault, and it must use the Azure Resource Manager deployment model, rather than the classic deployment model.
+> If you decide to use an existing storage account, it must use the same subscription as your key vault. It must use the Azure Resource Manager deployment model, rather than the classic deployment model.
-## Obtain your key vault Resource ID
+## Obtain your key vault resource ID
-In the [CLI quickstart](quick-create-cli.md) and [PowerShell quickstart](quick-create-powershell.md), you created a key with a unique name. Use that name again in the steps below. If you cannot remember the name of your key vault, you can use the Azure CLI [az keyvault list](/cli/azure/keyvault#az_keyvault_list) command or the Azure PowerShell [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) cmdlet to list them.
+In the [CLI quickstart](quick-create-cli.md) and [PowerShell quickstart](quick-create-powershell.md), you created a key with a unique name. Use that name again in the following steps. If you can't remember the name of your key vault, you can use the Azure CLI [az keyvault list](/cli/azure/keyvault#az_keyvault_list) command, or the Azure PowerShell [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) cmdlet, to list them.
-Use the name of your key vault to find its Resource ID. With Azure CLI, use the [az keyvault show](/cli/azure/keyvault#az_keyvault_show) command.
+Use the name of your key vault to find its resource ID. With the Azure CLI, use the [az keyvault show](/cli/azure/keyvault#az_keyvault_show) command.
```azurecli-interactive
az keyvault show --name "<your-unique-keyvault-name>"
```
With Azure PowerShell, use the [Get-AzKeyVault](/powershell/module/az.keyvault/g
```powershell-interactive
Get-AzKeyVault -VaultName "<your-unique-keyvault-name>"
```
-The Resource ID for your key vault will be on the format "/subscriptions/<your-subscription-ID>/resourceGroups/myResourceGroup/providers/Microsoft.KeyVault/vaults/<your-unique-keyvault-name>". Note it for the next step.
+The resource ID for your key vault is in the following format: "/subscriptions/*your-subscription-ID*/resourceGroups/myResourceGroup/providers/Microsoft.KeyVault/vaults/*your-unique-keyvault-name*". Note it for the next step.
-## Enable Logging
+## Enable logging
-You can enable logging for Key Vault using the Azure CLI, Azure PowerShell, or the Azure portal.
+You can enable logging for Key Vault by using the Azure CLI, Azure PowerShell, or the Azure portal.
# [Azure CLI](#tab/azure-cli)

### Azure CLI
-Use the Azure CLI [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings) command together with the storage account ID and the key vault Resource ID.
+Use the Azure CLI [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings) command, the storage account ID, and the key vault resource ID, as follows:
```azurecli-interactive
az monitor diagnostic-settings create --storage-account "<storage-account-id>" --resource "<key-vault-resource-id>" --name "Key vault logs" --logs '[{"category": "AuditEvent","enabled": true}]' --metrics '[{"category": "AllMetrics","enabled": true}]'
```
-Optionally, you can set a retention policy for your logs, so that older logs are automatically deleted after a specified amount of time. For example, you could set a retention policy that automatically deletes logs older than 90 days.
+Optionally, you can set a retention policy for your logs, so that older logs are automatically deleted after a specified amount of time. For example, you might set a retention policy that automatically deletes logs older than 90 days.
With the Azure CLI, use the [az monitor diagnostic-settings update](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_update) command.
az monitor diagnostic-settings update --name "Key vault retention policy" --reso
# [Azure PowerShell](#tab/azure-powershell)
-Use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet, with the **-Enabled** flag set to **$true** and the category set to `AuditEvent` (the only category for Key Vault logging):
+Use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet, with the `-Enabled` flag set to `$true` and the `category` set to `AuditEvent` (the only category for Key Vault logging):
```powershell-interactive
Set-AzDiagnosticSetting -ResourceId "<key-vault-resource-id>" -StorageAccountId $sa.id -Enabled $true -Category "AuditEvent"
```
-Optionally, you can set a retention policy for your logs, so that older logs are automatically deleted after a specified amount of time. For example, you could set a retention policy that automatically deletes logs older than 90 days.
+Optionally, you can set a retention policy for your logs, so that older logs are automatically deleted after a specified amount of time. For example, you might set a retention policy that automatically deletes logs older than 90 days.
With Azure PowerShell, use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet.
Set-AzDiagnosticSetting "<key-vault-resource-id>" -StorageAccountId $sa.id -Enab
# [Azure portal](#tab/azure-portal)
-To configuring Diagnostic settings in the portal, follow these steps.
+To configure diagnostic settings in the Azure portal, follow these steps:
-1. Select the Diagnostic settings from the resource blade menu.
+1. From the **Resource** pane menu, select **Diagnostic settings**.
- :::image type="content" source="../media/diagnostics-portal-1.png" alt-text="Diagnostic Portal 1":::
+ :::image type="content" source="../media/diagnostics-portal-1.png" alt-text="Screenshot that shows how to select diagnostic settings.":::
-1. Click on the "+ Add diagnostic setting"
+1. Select **+ Add diagnostic setting**.
- :::image type="content" source="../media/diagnostics-portal-2.png" alt-text="Diagnostic Portal 2":::
+ :::image type="content" source="../media/diagnostics-portal-2.png" alt-text="Screenshot that shows adding a diagnostic setting.":::
-1. Select a name to call your diagnostic setting. To configure logging for Azure Monitor for Key Vault, select the "AuditEvent" option and "Send to Log Analytics workspace". Then choose the subscription and Log Analytics workspace that you want to send your logs.
+1. Select a name for your diagnostic setting. To configure logging for Azure Monitor for Key Vault, select **AuditEvent** and **Send to Log Analytics workspace**. Then choose the subscription and Log Analytics workspace to which you want to send your logs.
- :::image type="content" source="../media/diagnostics-portal-3.png" alt-text="Diagnostic Portal 3":::
+ :::image type="content" source="../media/diagnostics-portal-3.png" alt-text="Screenshot of diagnostic settings options.":::
- Otherwise, select the options that pertain to the logs that you wish to select
+ Otherwise, select the options that pertain to the logs that you want to select.
-1. Once you have selected your desired options, select save.
+1. When you have selected your desired options, select **Save**.
- :::image type="content" source="../media/diagnostics-portal-4.png" alt-text="Diagnostic Portal 4":::
+ :::image type="content" source="../media/diagnostics-portal-4.png" alt-text="Screenshot that shows how to save the options you selected.":::
## Access your logs
-Key Vault logs are stored in the "insights-logs-auditevent" container in the storage account that you provided. To view the logs, you have to download blobs.
+Your Key Vault logs are in the *insights-logs-auditevent* container in the storage account that you provided. To view the logs, you have to download blobs.
First, list all the blobs in the container. With the Azure CLI, use the [az storage blob list](/cli/azure/storage/blob#az_storage_blob_list) command.
```azurecli-interactive
az storage blob list --account-name "<your-unique-storage-account-name>" --container-name "insights-logs-auditevent"
```
-With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) list all the blobs in this container, enter:
+With Azure PowerShell, use [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob). To list all the blobs in this container, enter:
```powershell
Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context
```
-As you will see from the output of either the Azure CLI command or the Azure PowerShell cmdlet, the name of the blobs are in the format `resourceId=<ARM resource ID>/y=<year>/m=<month>/d=<day of month>/h=<hour>/m=<minute>/filename.json`. The date and time values use UTC.
+From the output of either the Azure CLI command or the Azure PowerShell cmdlet, you can see that the names of the blobs are in the following format: `resourceId=<ARM resource ID>/y=<year>/m=<month>/d=<day of month>/h=<hour>/m=<minute>/filename.json`. The date and time values use Coordinated Universal Time.
-Because you can use the same storage account to collect logs for multiple resources, the full resource ID in the blob name is useful to access or download just the blobs that you need. But before we do that, we'll first cover how to download all the blobs.
+Because you can use the same storage account to collect logs for multiple resources, the full resource ID in the blob name is useful to access or download just the blobs that you need.
-With the Azure CLI, use the [az storage blob download](/cli/azure/storage/blob#az_storage_blob_download) command, pass it the names of the blobs, and the path to the file where you wish to save the results.
+But first, download all the blobs. With the Azure CLI, use the [az storage blob download](/cli/azure/storage/blob#az_storage_blob_download) command, pass it the names of the blobs, and the path to the file where you want to save the results.
```azurecli-interactive
az storage blob download --container-name "insights-logs-auditevent" --file <path-to-file> --name "<blob-name>" --account-name "<your-unique-storage-account-name>"
```
-With Azure PowerShell, use the [Gt-AzStorageBlobs](/powershell/module/az.storage/get-azstorageblob) cmdlet to get a list of the blobs, then pipe that to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet to download the logs to your chosen path.
+With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) cmdlet to get a list of the blobs. Then pipe that list to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet to download the logs to your chosen path.
```powershell-interactive
$blobs = Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context | Get-AzStorageBlobContent -Destination "<path-to-file>"
```
-When you run this second cmdlet in PowerShell, the **/** delimiter in the blob names creates a full folder structure under the destination folder. You'll use this structure to download and store the blobs as files.
+When you run this second cmdlet in PowerShell, the `/` delimiter in the blob names creates a full folder structure under the destination folder. You'll use this structure to download and store the blobs as files.
To selectively download blobs, use wildcards. For example:
```powershell-interactive
Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context -Blob '*/year=2016/m=01/*'
```
-You're now ready to start looking at what's in the logs. But before we move on to that, you should know two more commands:
-
-For details on how to read the logs, see [Key Vault logging: Interpret your Key Vault logs](logging.md#interpret-your-key-vault-logs)
## Use Azure Monitor logs

You can use the Key Vault solution in Azure Monitor logs to review Key Vault `AuditEvent` logs. In Azure Monitor logs, you use log queries to analyze data and get the information you need.
For more information, including how to set this up, see [Azure Key Vault in Azur
## Next steps

- For conceptual information, including how to interpret Key Vault logs, see [Key Vault logging](logging.md).
- For a tutorial that uses Azure Key Vault in a .NET web application, see [Use Azure Key Vault from a web application](tutorial-net-create-vault-azure-web-app.md).
- For programming references, see [Azure Key Vault developer's guide](developers-guide.md).
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-compute-studio.md
Follow the previous steps to view the list of compute targets. Then use these st
:::image type="content" source="media/how-to-create-attach-studio/view-list.png" alt-text="View compute status from a list":::
-### Compute instance
+### <a name="compute-instance"></a> Compute instance
Use the [steps above](#portal-create) to create the compute instance. Then fill out the form as follows:
Use the [steps above](#portal-create) to create the compute instance. Then fill
|Virtual machine type | Choose CPU or GPU. This type cannot be changed after creation |
|Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
|Enable/disable SSH access | SSH access is disabled by default. SSH access cannot be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md) |
-|Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. |
+|Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. Also use advanced settings to specify a [setup script](how-to-create-manage-compute-instance.md#setup-script). |
### <a name="amlcompute"></a> Compute clusters
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
-+
Compute instances can run jobs securely in a [virtual network environment](how-t
## Create
+> [!IMPORTANT]
+> Items marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ **Time estimate**: Approximately 5 minutes.

Creating a compute instance is a one-time process for your workspace. You can reuse the compute as a development workstation or as a compute target for training. You can have multiple compute instances attached to your workspace.
For information on creating a compute instance in the studio, see [Create comput
You can also create a compute instance with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance).
-### Create on behalf of (preview)
++
+## Create on behalf of (preview)
As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:

* [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/fhir/find-identity-object-ids.md). You can also find these values in the Azure Active Directory portal.
* REST API

The data scientist you create the compute instance for needs the following [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) permissions:
The data scientist can start, stop, and restart the compute instance. They can u
* RStudio * Integrated notebooks
+## <a name="setup-script"></a> Customize the compute instance with a script (preview)
+
+> [!TIP]
+> This preview is currently available for workspaces in West Central US and East US regions.
+
+Use a setup script for an automated way to customize and configure the compute instance at provisioning time. As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
+
+Some examples of what you can do in a setup script (a short illustrative sketch follows this list):
+
+* Install packages and tools
+* Mount data
+* Create custom conda environment and Jupyter kernels
+* Clone git repositories
+
+### Create the setup script
+
+The setup script is a shell script that runs as *azureuser*. Create or upload the script into your **Notebooks** files:
+
+1. Sign into the [studio](https://ml.azure.com) and select your workspace.
+1. On the left, select **Notebooks**
+1. Use the **Add files** tool to create or upload your setup shell script. Make sure the script filename ends in ".sh". When you create a new file, also change the **File type** to *bash(.sh)*.
++
+When the script runs, the current working directory is the directory where it was uploaded. If you upload the script to **Users>admin**, the location of the file is */mnt/batch/tasks/shared/LS_root/mounts/clusters/**ciname**/code/Users/admin* when provisioning the compute instance named **ciname**.
+
+Script arguments can be referred to in the script as $1, $2, etc. For example, if you execute `scriptname ciname` then in the script you can `cd /mnt/batch/tasks/shared/LS_root/mounts/clusters/$1/code/admin` to navigate to the directory where the script is stored.
+
+You can also retrieve the path inside the script:
+
+```shell
+#!/bin/bash
+SCRIPT=$(readlink -f "$0")
+SCRIPT_PATH=$(dirname "$SCRIPT")
+```
+
+### Use the script in the studio
+
+Once you store the script, specify it during creation of your compute instance:
+
+1. Sign into the [studio](https://ml.azure.com) and select your workspace.
+1. On the left, select **Compute**.
+1. Select **+New** to create a new compute instance.
+1. [Fill out the form](how-to-create-attach-compute-studio.md#compute-instance).
+1. On the second page of the form, open **Show advanced settings**.
+1. Turn on **Provision with setup script**.
+1. Browse to the shell script you saved. Or upload a script from your computer.
+1. Add command arguments as needed.
++
+### Use script in a Resource Manager template
+
+In a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance), add `setupScripts` to invoke the setup script when the compute instance is provisioned. For example:
+
+```json
+"setupScripts":{
+ "scripts":{
+ "creationScript":{
+ "scriptSource":"workspaceStorage",
+ "scriptData":"[parameters('creationScript.location')]",
+ "scriptArguments":"[parameters('creationScript.cmdArguments')]"
+ }
+ }
+}
+```
+
+You can instead provide the script inline in a Resource Manager template. The shell command can refer to any dependencies uploaded into the notebooks file share. When you use an inline string, the working directory for the script is */mnt/batch/tasks/shared/LS_root/mounts/clusters/**ciname**/code/Users*.
+
+For example, specify a base64 encoded command string for `scriptData`:
+
+```json
+"setupScripts":{
+ "scripts":{
+ "creationScript":{
+ "scriptSource":"inline",
+ "scriptData":"[base64(parameters('inlineCommand'))]",
+ "scriptArguments":"[parameters('creationScript.cmdArguments')]"
+ }
+ }
+}
+```
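+
+If you want to prepare or inspect the encoded value outside of the template, you can reproduce roughly what the `base64()` template function produces from a shell (a sketch; the command string is illustrative):
+
+```shell
+# Encode an inline setup command for use as scriptData
+echo -n 'sudo apt-get update && sudo apt-get install -y jq' | base64
+```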
+
+### Setup script logs
+
+Logs from the setup script execution appear in the logs folder in the compute instance details page. Logs are stored back to your notebooks file share under the Logs\<compute instance name> folder. Script file and command arguments for a particular compute instance are shown in the details page.
+ ## Manage
+
+Start, stop, restart, and delete a compute instance. A compute instance does not automatically scale down, so make sure to stop the resource to prevent ongoing charges.
machine-learning Explore Data Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/explore-data-blob.md
editor: marktab
Previously updated : 01/10/2020 Last updated : 04/30/2021
To explore and manipulate a dataset, it must first be downloaded from the blob s
1. Download the data from Azure blob with the following Python code sample using Blob service. Replace the variables in the following code with your specific values: ```python
- from azure.storage.blob import BlockBlobService
+ from azure.storage.blob import BlobServiceClient
import pandas as pd
+ import time
- import tables
- STORAGEACCOUNTNAME= <storage_account_name>
+ STORAGEACCOUNTURL= <storage_account_url>
STORAGEACCOUNTKEY= <storage_account_key>
LOCALFILENAME= <local_file_name>
CONTAINERNAME= <container_name>
BLOBNAME= <blob_name>
To explore and manipulate a dataset, it must first be downloaded from the blob s
#download from blob
t1=time.time()
- blob_service=BlockBlobService(account_name=STORAGEACCOUNTNAME,account_key=STORAGEACCOUNTKEY)
- blob_service.get_blob_to_path(CONTAINERNAME,BLOBNAME,LOCALFILENAME)
+ blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
+ blob_client_instance = blob_service_client_instance.get_blob_client(CONTAINERNAME, BLOBNAME, snapshot=None)
+ with open(LOCALFILENAME, "wb") as my_blob:
+ blob_data = blob_client_instance.download_blob()
+ blob_data.readinto(my_blob)
t2=time.time()
print(("It takes %s seconds to download "+BLOBNAME) % (t2 - t1))
```
To explore and manipulate a dataset, it must first be downloaded from the blob s
dataframe_blobdata = pd.read_csv(LOCALFILENAME) ```
-Now you are ready to explore the data and generate features on this dataset.
+If you need more general information on reading from an Azure Storage blob, see the documentation [Azure Storage Blobs client library for Python](https://docs.microsoft.com/python/api/overview/azure/storage-blob-readme?view=azure-python).
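+
+The updated sample above uses the v12 client library (`BlobServiceClient`). If it isn't already available in your Python environment, a typical installation step would be (assuming pip is available):
+
+```shell
+# Install the v12 Azure Storage Blobs client library and pandas
+pip install azure-storage-blob pandas
+```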
+
+Now you are ready to explore the data and generate features on this dataset.
## <a name="blob-dataexploration"></a>Examples of data exploration using pandas Here are a few examples of ways to explore data using pandas:
marketplace Create Saas Dev Test Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-saas-dev-test-offer.md
Previously updated : 03/25/2021 Last updated : 04/20/2021 # Create a test offer
To reduce your cost for testing the pricing models, including Marketplace custom
| | - |
| $0.00 | Set a total transaction cost of zero to have no financial impact. Use this price when making calls to the metering APIs, or to test purchasing plans in your offer while developing your solution. |
| $0.01 - $49.99 | Use this price range to test analytics, reporting, and the purchase process. |
-| $50.00 and above | Use this price range to test payout. For information about our payment schedule, see [Payout schedules and processes](/partner-center/payout-policy-details). |
+| $50.00 - $100.00 | Use this price range to test payout. For information about our payment schedule, see [Payout schedules and processes](/partner-center/payout-policy-details). |
|||
-To avoid being charged a store service fee on your test, open a [support ticket](support.md).
+> [!IMPORTANT]
+> To avoid being charged a store service fee on your test, open a [support ticket](support.md) within 7 days of the test purchase.
#### Free trial
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-plans.md
You need to copy the Service ID of each plan you created so you can map them to
## Add Service IDs to your solution package
-1. Add the Service IDs you copied in the previous step to your solution package. To learn how, see [Adding license metadata to your solution](https://go.microsoft.com/fwlink/?linkid=2162161&clcid=0x409) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
+1. Add the Service IDs you copied in the previous step to your solution package. To learn how, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
1. After you create the CRM package .zip file, upload it to Azure Blob Storage. You will need to provide the SAS URL of the Azure Blob Storage account that contains the uploaded CRM package .zip file. ## Next steps
marketplace Marketplace Faq Publisher Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-faq-publisher-guide.md
After you sign up and accept the Publisher Agreement, you'll have access to the
For more information, see [Welcome to the commercial marketplace](index.yml) and [Monetize your Microsoft 365 add-in through Microsoft Commercial Marketplace](/office/dev/store/monetize-addins-through-microsoft-commercial-marketplace).
+### How can my own employees use our offers from the marketplace without being charged?
+
+To prevent Microsoft from charging your employees and assessing the store service fee on the sale of your offer, you must first create a [private plan](/azure/marketplace/private-offers) for the offer with a $0 price and send this offer to the internal users who want to purchase it.
+
+You can also use our [Private Marketplace](/marketplace/create-manage-private-azure-marketplace) functionality to ensure internal users are only purchasing specific offers that are approved by your administrator.
+ ### How do I get support assistance for the commercial marketplace? To contact our marketplace publisher support team, you can [submit a support ticket](https://aka.ms/marketplacepublishersupport) from within Partner Center.
marketplace Third Party License https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/third-party-license.md
This table illustrates the high-level process to manage third-party apps through
| Step | Details | | | - |
-| Step 1: Create offer | The ISV creates an offer in Partner Center and chooses to manage licenses for this offer through Microsoft. This includes defining one or more licensing plans for the offer. |
-| Step 2: Update package | The ISV creates a solution package for the offer that includes license plan information as metadata, and uploads it to Partner Center for publication to Microsoft AppSource. To learn more, see [Adding license metadata to your solution](https://go.microsoft.com/fwlink/?linkid=2162161&clcid=0x409). |
+| Step 1: Create offer | The ISV creates an offer in Partner Center and chooses to manage licenses for this offer through Microsoft. This includes defining one or more licensing plans for the offer. For more information, see [Create a Dynamics 365 for Customer Engagement & Power Apps offer on Microsoft AppSource](dynamics-365-customer-engage-offer-setup.md). |
+| Step 2: Update package | The ISV creates a solution package for the offer that includes license plan information as metadata, and uploads it to Partner Center for publication to Microsoft AppSource. To learn more, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution). |
| Step 3: Purchase licenses | Customers discover the ISV's offer in AppSource or directly on the ISV's website. Customers purchase licenses for the plans they want directly from the ISV (these offers cannot be purchased through AppSource at this time). | | Step 4: Register deal | The ISV registers the purchase with Microsoft in Partner Center. As part of [deal registration](/partner-center/csp-commercial-marketplace-licensing), the ISV will specify the type and quantity of each licensing plan purchased by the customer. |
-| Step 5: Manage licenses | The license plans will appear in Microsoft 365 Admin Center for the customer to assign to users or groups in their organization. The customer can also install the application in their tenant via the Power Platform Admin Center. |
+| Step 5: Manage licenses | The license plans will appear in Microsoft 365 Admin Center for the customer to [assign to users or groups](/microsoft-365/commerce/licenses/manage-third-party-app-licenses) in their organization. The customer can also install the application in their tenant via the Power Platform Admin Center. |
| Step 6: Perform license check | When a user within the customer's organization tries to run an application, Microsoft checks to ensure that the user has a license before permitting them to run it. If they don't have a license, the user sees a message explaining that they need to contact an administrator for a license. |
-| Step 7: Report | ISVs can view information on provisioned and assigned licenses over a period of time and by geography. |
+| Step 7: View reports | ISVs can view information on provisioned and assigned licenses over a period of time and by geography. |
||| ## Enabling app license management through Microsoft
Here's how it works:
### Allow customers to install my app even if licenses are not assigned check box
-After you select the first box, the **Allow customers to install my app even if licenses are not assigned** box appears. Selecting this box enables customers to use the base features of the app without a license. If you choose this option, you need to configure your solution package to not require a license.
+After you select the first box, the **Allow customers to install my app even if licenses are not assigned** box appears. This option is useful if you are employing a "freemium" licensing strategy whereby you want to offer some basic features of your solution for free to all users and charge for premium features. Conversely, if you want to ensure that only tenants who currently own licenses for your product can download it from AppSource, then don't select this option.
+
+> [!NOTE]
+> If you choose this option, you need to configure your solution package to not require a license.
Here's how it works: - All AppSource users see the **Get it now** button on the offer listing page along with the **Contact me** button and will be permitted to download and install your offer. - If you do not select this option, then AppSource checks that the user's tenant has at least one license for your solution before showing the **Get it now** button. If there is no license in the user's tenant then only the **Contact Me** button is shown.
-This option is useful if you are employing a "freemium" licensing strategy whereby you want to offer some basic features of your solution for free to all users and charge for premium features. Conversely, if you want to ensure that only tenants who currently own licenses for your product can download it from AppSource, then don't select this option.
- For details about configuring an offer, see [How to create a Dynamics 365 for Customer Engagement & Power App offer](dynamics-365-customer-engage-offer-setup.md). ## Offer listing page on AppSource
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | - | - |
+| Offers | Publishers now have a simpler and faster way to prepare and publish their Azure Virtual Machine-based offers in Partner Center. To learn more, see [How to create a virtual machine using an approved base](azure-vm-create-using-approved-base.md). | 2021-03-22 |
| Analytics | Developers can use new report APIs to programmatically access commercial marketplace analytics data. You can schedule custom reports and download your marketplace data into your internal analytics systems. To learn more, see [Get started with programmatic access to analytics data](analytics-get-started.md). | 2021-03-08 | | Grow your business | Publishers can more easily mitigate the risk of their customers receiving an incorrect bill for metered billing usage. To learn more, see [Manage metered billing anomalies in Partner Center](anomaly-detection.md). | 2021-02-18 | ||||
Learn about important updates in the commercial marketplace program of Partner C
| Capabilities | Reorganized and clarified the [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md) documentation to help independent software vendors (ISVs) understand the difference between the various transactable and non-transactable options. | 2021-04-06 | | Policies | We've updated the [Commercial marketplace certification policies](/legal/marketplace/certification-policies). | 2021-04-02 | | Offers | New guidance for publishers to test their software as a service (SaaS) offers by creating separate development and production offers. To learn more, see [Create a test offer (SaaS)](create-saas-dev-test-offer.md). | 2021-03-25 |
-| Offers | Publishers now have a simpler and faster way to prepare and publish their Azure Virtual Machine-based offers in Partner Center. To learn more, see [How to create a virtual machine using an approved base](azure-vm-create-using-approved-base.md). | 2021-03-22 |
| Co-sell | Improved documentation to help partners use the commercial marketplace to collaboratively sell (co-sell) their offers with Microsoft sales teams. To learn more, see the following topics:<ul><li>[Co-sell with Microsoft sales teams and partners overview](co-sell-overview.md)</li><li>[Co-sell requirements](co-sell-requirements.md)</li><li>[Configure co-sell for a commercial marketplace offer](co-sell-configure.md)</li><li>[Verify co-sell status of a commercial marketplace offer](co-sell-status.md)</li></ul> | 2021-03-17 | ||||
service-fabric Service Fabric Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-technical-overview.md
# Service Fabric terminology overview
-Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices. You can [host Service Fabric clusters anywhere](service-fabric-deploy-anywhere.md): Azure, in an on-premises datacenter, or on any cloud provider. Service Fabric is the orchestrator that powers [Azure Service Fabric Mesh](../service-fabric-mesh/index.yml). You can use any framework to write your services and choose where to run the application from multiple environment choices. This article details the terminology used by Service Fabric to understand the terms used in the documentation.
+Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices. Service Fabric is a container and process orchestrator that allows you to [host your clusters anywhere](service-fabric-deploy-anywhere.md): on Azure, in an on-premises datacenter, or on any cloud provider. You can use any framework to write your services and choose where to run the application from multiple environment choices. This article details the terminology used by Service Fabric to help you understand the terms used in the documentation.
## Infrastructure concepts
Azure Service Fabric is a distributed systems platform that makes it easy to pac
## Application and service concepts
-**Service Fabric Mesh Application**: Service Fabric Mesh Applications are described by the Resource Model (YAML and JSON resource files) and can be deployed to any environment where Service Fabric runs.
-
-**Service Fabric Native Application**: Service Fabric Native Applications are described by the Native Application Model (XML-based application and service manifests). Service Fabric Native Applications cannot run in Service Fabric Mesh.
-
-### Service Fabric Mesh Application concepts
-
-**Application**: An application is the unit of deployment, versioning, and lifetime of a Mesh application. The lifecycle of each application instance can be managed independently. Applications are composed of one or more service code packages and settings. An application is defined using the Azure Resource Model (RM) schema. Services are described as properties of the application resource in a RM template. Networks and volumes used by the application are referenced by the application. When creating an application, the application, service(s), network, and volume(s) are modeled using the Service Fabric Resource Model.
-
-**Service**: A service in an application represents a microservice and performs a complete and standalone function. Each service is composed of one or more code packages that describe everything needed to run the container image associated with the code package. The number of services in an application can be scaled up and down.
-
-**Network**: A network resource creates a private network for your applications and is independent of the applications or services that may refer to it. Multiple services from different applications can be part of the same network. Networks are deployable resources that are referenced by applications.
-
-**Code package**: Code packages describe everything needed to run the container image associated with the code package, including the following:
-
-* Container name, version, and registry
-* CPU and memory resources required for each container
-* Network endpoints
-* Volumes to mount in the container, referencing a separate volume resource.
-
-All the code packages defined as part of an application resource are deployed and activated together as a group.
-
-**Volume**: Volumes are directories that get mounted inside your container instances that you can use to persist state. The Azure Files volume driver mounts an Azure Files share to a container and provides reliable data storage through any API which supports file storage. Volumes are deployable resources that are referenced by applications.
+**Service Fabric Native Application**: Service Fabric Native Applications are described by the Native Application Model (XML-based application and service manifests).
### Service Fabric Native Application concepts
Read the [Deploy an application](service-fabric-deploy-remove-applications.md) a
To deploy your services, you need to describe how they should run. Service Fabric supports three different deployment models:
-### Resource model (preview)
-
-Service Fabric Resources are anything that can be deployed individually to Service Fabric; including applications, services, networks, and volumes. Resources are defined using a JSON file, which can be deployed to a cluster endpoint. For Service Fabric Mesh, the Azure Resource Model schema is used. A YAML file schema can also be used to more easily author definition files. Resources can be deployed anywhere Service Fabric runs. The resource model is the simplest way to describe your Service Fabric applications. Its main focus is on simple deployment and management of containerized services. To learn more, read [Introduction to the Service Fabric Resource Model](../service-fabric-mesh/service-fabric-mesh-service-fabric-resources.md).
- ### Native model The native application model provides your applications with full low-level access to Service Fabric. Applications and services are defined as registered types in XML manifest files.
-The native model supports the Reliable Services and Reliable Actors frameworks, which provides access to the Service Fabric runtime APIs and cluster management APIs in C# and Java. The native model also supports arbitrary containers and executables. The native model is not supported in the [Service Fabric Mesh environment](../service-fabric-mesh/service-fabric-mesh-overview.md).
+The native model supports the Reliable Services and Reliable Actors frameworks, which provide access to the Service Fabric runtime APIs and cluster management APIs in C# and Java. The native model also supports arbitrary containers and executables.
**Reliable Services**: An API to build stateless and stateful services. Stateful services store their state in Reliable Collections, such as a dictionary or a queue. You can also plug in various communication stacks, such as Web API and Windows Communication Foundation (WCF).
Read the [Choose a programming model for your service](service-fabric-choose-fra
Service Fabric is an open-source platform technology that several different services and products are based on. Microsoft provides the following options: - **Azure Service Fabric**: The Azure hosted Service Fabric cluster offering. It provides integration between Service Fabric and the Azure infrastructure, along with upgrade and configuration management of Service Fabric clusters. - **Service Fabric standalone**: A set of installation and configuration tools to [deploy Service Fabric clusters anywhere](./service-fabric-deploy-anywhere.md) (on-premises or on any cloud provider). Not managed by Azure. - **Service Fabric development cluster**: Provides a local development experience on Windows, Linux, or Mac for development of Service Fabric applications.
-## Environment, framework, and deployment model support matrix
-
-Different environments have different levels of support for frameworks and deployment models. The following table describes the supported framework and deployment model combinations.
-
-| Type of Application | Described By | Azure Service Fabric Mesh | Azure Service Fabric Clusters (any OS)| Local cluster | Standalone cluster |
-|||||||
-| Service Fabric Mesh Applications | Resource Model (YAML & JSON) | Supported |Not supported | Windows- supported, Linux and Mac- not supported | Windows- not supported |
-|Service Fabric Native Applications | Native Application Model (XML) | Not Supported| Supported|Supported|Windows- supported|
-
-The following table describes the different application models and the tooling that exists for them against Service Fabric.
-
-| Type of Application | Described By | Visual Studio | Eclipse | SFCTL | AZ CLI | PowerShell|
-||||||||
-| Service Fabric Mesh Applications | Resource Model (YAML & JSON) | VS 2017 |Not supported |Not supported | Supported - Mesh environment only | Not Supported|
-|Service Fabric Native Applications | Native Application Model (XML) | VS 2017 and VS 2015| Supported|Supported|Supported|Supported|
- ## Next steps To learn more about Service Fabric:
To learn more about Service Fabric:
* [Overview of Service Fabric](service-fabric-overview.md) * [Why a microservices approach to building applications?](service-fabric-overview-microservices.md) * [Application scenarios](service-fabric-application-scenarios.md)-
-To learn more about Service Fabric Mesh:
-
-* [Overview of Service Fabric Mesh](../service-fabric-mesh/service-fabric-mesh-overview.md)
spring-cloud Structured App Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/structured-app-log.md
To improve log query experience, an application log is required to be in JSON fo
{"timestamp":"2021-01-08T09:23:51.280Z","logger":"com.example.demo.HelloController","level":"ERROR","thread":"http-nio-1456-exec-4","mdc":{"traceId":"c84f8a897041f634","spanId":"c84f8a897041f634"},"stackTrace":"java.lang.RuntimeException: get an exception\r\n\tat com.example.demo.HelloController.throwEx(HelloController.java:54)\r\n\","message":"Got an exception","exceptionClass":"RuntimeException"} ```
+## Limitations
+
+Each line of JSON log output can contain at most **16K bytes**. If the JSON output of a single log record exceeds this limit, it is forcibly broken into multiple lines, and each raw line is collected into the `Log` column without being parsed structurally.
+
+Generally, this happens on exception logging with a deep stack trace, especially when the [AppInsights In-Process Agent](./how-to-application-insights.md) is enabled. Apply limit settings to the stack trace output (see the configuration samples below) to ensure the final output gets parsed properly.
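+
+If you capture the raw application log output to a file, you can spot records that would exceed the limit with a quick check like the following sketch (`app.log` is a placeholder file name):
+
+```shell
+# Report any log lines longer than 16384 bytes; LC_ALL=C makes length() count bytes
+LC_ALL=C awk 'length($0) > 16384 { print NR ": " length($0) " bytes" }' app.log
+```
+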
+ ## Generate schema-compliant JSON log For Spring applications, you can generate expected JSON log format using common [logging frameworks](https://docs.spring.io/spring-boot/docs/2.1.13.RELEASE/reference/html/boot-features-logging.html#boot-features-custom-log-configuration), such as [logback](http://logback.qos.ch/) and [log4j2](https://logging.apache.org/log4j/2.x/).
The procedure:
</nestedField> <stackTrace> <fieldName>stackTrace</fieldName>
+ <!-- maxLength - limit the length of the stack trace -->
+ <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
+ <maxDepthPerThrowable>200</maxDepthPerThrowable>
+ <maxLength>14000</maxLength>
+ <rootCauseFirst>true</rootCauseFirst>
+ </throwableConverter>
</stackTrace> <message /> <throwableClassName>
The procedure:
<configuration> <appenders> <console name="Console" target="SYSTEM_OUT">
- <JsonTemplateLayout eventTemplateUri="classpath:jsonTemplate.json" />
+ <!-- maxStringLength - limit the length of the stack trace -->
+ <JsonTemplateLayout eventTemplateUri="classpath:jsonTemplate.json" maxStringLength="14000" />
</console> </appenders> <loggers>
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
The article details the steps to configure a Point-to-Site VPN on Windows (Windo
- An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage accounts, which are management constructs that represent a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can learn more about how to deploy Azure file shares and storage accounts in [Create an Azure file share](storage-how-to-create-file-share.md). -- A private endpoint for the storage account containing the Azure file share you want to mount on-premises. To learn more about how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell).
+- A virtual network with a private endpoint for the storage account containing the Azure file share you want to mount on-premises. To learn more about how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell).
-## Deploy a virtual network
-To access your Azure file share and other Azure resources from on-premises via a Point-to-Site VPN, you must create a virtual network, or VNet. The P2S VPN connection you will automatically create is a bridge between your on-premises Windows machine and this Azure virtual network.
+## Collect environment information
+In order to set up the point-to-site VPN, we first need to collect some information about your environment for use throughout the guide. See the [prerequisites](#prerequisites) section if you have not already created a storage account, virtual network, or private endpoint.
-The following PowerShell will create an Azure virtual network with three subnets: one for your storage account's service endpoint, one for your storage account's private endpoint, which is required to access the storage account on-premises without creating custom routing for the public IP of the storage account that may change, and one for your virtual network gateway that provides the VPN service.
-
-Remember to replace `<region>`, `<resource-group>`, and `<desired-vnet-name>` with the appropriate values for your environment.
+Remember to replace `<resource-group-name>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment.
```PowerShell
-$region = "<region>"
-$resourceGroupName = "<resource-group>"
-$virtualNetworkName = "<desired-vnet-name>"
+$resourceGroupName = "<resource-group-name>"
+$virtualNetworkName = "<vnet-name>"
+$subnetName = "<subnet-name>"
+$storageAccountName = "<storage-account-name>"
-$virtualNetwork = New-AzVirtualNetwork `
- -ResourceGroupName $resourceGroupName `
- -Name $virtualNetworkName `
- -Location $region `
- -AddressPrefix "192.168.0.0/16"
-
-Add-AzVirtualNetworkSubnetConfig `
- -Name "ServiceEndpointSubnet" `
- -AddressPrefix "192.168.0.0/24" `
- -VirtualNetwork $virtualNetwork `
- -ServiceEndpoint "Microsoft.Storage" `
- -WarningAction SilentlyContinue | Out-Null
-
-Add-AzVirtualNetworkSubnetConfig `
- -Name "PrivateEndpointSubnet" `
- -AddressPrefix "192.168.1.0/24" `
- -VirtualNetwork $virtualNetwork `
- -WarningAction SilentlyContinue | Out-Null
-
-Add-AzVirtualNetworkSubnetConfig `
- -Name "GatewaySubnet" `
- -AddressPrefix "192.168.2.0/24" `
- -VirtualNetwork $virtualNetwork `
- -WarningAction SilentlyContinue | Out-Null
-
-$virtualNetwork | Set-AzVirtualNetwork | Out-Null
$virtualNetwork = Get-AzVirtualNetwork `
    -ResourceGroupName $resourceGroupName `
    -Name $virtualNetworkName
-$serviceEndpointSubnet = $virtualNetwork.Subnets | `
- Where-Object { $_.Name -eq "ServiceEndpointSubnet" }
-$privateEndpointSubnet = $virtualNetwork.Subnets | `
- Where-Object { $_.Name -eq "PrivateEndpointSubnet" }
-$gatewaySubnet = $virtualNetwork.Subnets | `
- Where-Object { $_.Name -eq "GatewaySubnet" }
+$subnetId = $virtualNetwork | `
+ Select-Object -ExpandProperty Subnets | `
+    Where-Object { $_.Name -eq $subnetName } | `
+ Select-Object -ExpandProperty Id
+
+$storageAccount = Get-AzStorageAccount `
+ -ResourceGroupName $resourceGroupName `
+ -Name $storageAccountName
+
+$privateEndpoint = Get-AzPrivateEndpoint | `
+ Where-Object {
+ $subnets = $_ | `
+ Select-Object -ExpandProperty Subnet | `
+ Where-Object { $_.Id -eq $subnetId }
+
+ $connections = $_ | `
+ Select-Object -ExpandProperty PrivateLinkServiceConnections | `
+ Where-Object { $_.PrivateLinkServiceId -eq $storageAccount.Id }
+
+ $null -ne $subnets -and $null -ne $connections
+ } | `
+ Select-Object -First 1
``` ## Create root certificate for VPN authentication
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-netapp-comparison.md
Most workloads that require cloud file storage work well on either Azure Files o
| Category | Azure Files | Azure NetApp Files | ||-|| | Description | [Azure Files](https://azure.microsoft.com/services/storage/files/) is a fully managed, highly available, enterprise-grade service that is optimized for random access workloads with in-place data updates.<br><br> Azure Files is built on the same Azure storage platform as other services like Azure Blobs. | [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) is a fully managed, highly available, enterprise-grade NAS service that can handle the most demanding, high-performance, low-latency workloads requiring advanced data management capabilities. It enables the migration of workloads that are deemed "un-migratable" without it.<br><br> ANF is built on NetApp's bare metal with ONTAP storage OS running inside the Azure datacenter for a consistent Azure experience and on-premises-like performance. |
-| Protocols | Premium<br><ul><li>SMB 2.1, 3.0</li><li>NFS 4.1 (preview)</li><li>REST</li></ul><br>Standard<br><ul><li>SMB 2.1, 3.0</li><li>REST</li></ul><br> To learn more, see [available file share protocols](./storage-files-compare-protocols.md). | All tiers<br><ul><li>SMB 1, 2.x, 3.x</li><li>NFS 3.0, 4.1</li><li>Dual protocol access (NFSv3/SMB)</li></ul><br> To learn more, see how to create [NFS](../../azure-netapp-files/azure-netapp-files-create-volumes.md), [SMB](../../azure-netapp-files/azure-netapp-files-create-volumes-smb.md), or [dual-protocol](../../azure-netapp-files/create-volumes-dual-protocol.md) volumes. |
+| Protocols | Premium<br><ul><li>SMB 2.1, 3.0, 3.1.1</li><li>NFS 4.1 (preview)</li><li>REST</li></ul><br>Standard<br><ul><li>SMB 2.1, 3.0, 3.1.1</li><li>REST</li></ul><br> To learn more, see [available file share protocols](./storage-files-compare-protocols.md). | All tiers<br><ul><li>SMB 1, 2.x, 3.x</li><li>NFS 3.0, 4.1</li><li>Dual protocol access (NFSv3/SMB)</li></ul><br> To learn more, see how to create [NFS](../../azure-netapp-files/azure-netapp-files-create-volumes.md), [SMB](../../azure-netapp-files/azure-netapp-files-create-volumes-smb.md), or [dual-protocol](../../azure-netapp-files/create-volumes-dual-protocol.md) volumes. |
| Region Availability | Premium<br><ul><li>30+ Regions</li></ul><br>Standard<br><ul><li>All regions</li></ul><br> To learn more, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). | All tiers<br><ul><li>25+ Regions</li></ul><br> To learn more, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). | | Redundancy | Premium<br><ul><li>LRS</li><li>ZRS</li></ul><br>Standard<br><ul><li>LRS</li><li>ZRS</li><li>GRS</li><li>GZRS</li></ul><br> To learn more, see [redundancy](./storage-files-planning.md#redundancy). | All tiers<br><ul><li>Built-in local HA</li><li>[Cross-region replication](../../azure-netapp-files/cross-region-replication-introduction.md)</li></ul> | | Service-Level Agreement (SLA)<br><br> Note that SLAs for Azure Files and Azure NetApp Files are calculated differently. | [SLA for Azure Files](https://azure.microsoft.com/support/legal/sla/storage/) | [SLA for Azure NetApp Files](https://azure.microsoft.com/support/legal/sla/netapp) |
Most workloads that require cloud file storage work well on either Azure Files o
| Category | Azure Files | Azure NetApp Files | ||||
-| Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>1 GiB</li></ul> | All tiers<br><ul><li>100 GiB (Minimum capacity pool size: 4 TiB)</li></ul> |
-| Maximum Share/Volume Size | Premium<br><ul><li>100 TiB</li></ul><br>Standard<br><ul><li>100 TiB</li></ul> | All tiers<br><ul><li>100 TiB (500-TiB capacity pool limit)</li></ul><br>Up to 12.5 PiB per Azure NetApp account. |
+| Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>No minimum.</li></ul> | All tiers<br><ul><li>100 GiB (Minimum capacity pool size: 4 TiB)</li></ul> |
+| Maximum Share/Volume Size | 100 TiB | All tiers<br><ul><li>100 TiB (500-TiB capacity pool limit)</li></ul><br>Up to 12.5 PiB per Azure NetApp account. |
| Maximum Share/Volume IOPS | Premium<br><ul><li>Up to 100k</li></ul><br>Standard<br><ul><li>Up to 10k</li></ul> | Ultra and Premium<br><ul><li>Up to 450k </li></ul><br>Standard<br><ul><li>Up to 320k</li></ul> | | Maximum Share/Volume Throughput | Premium<br><ul><li>Up to 10 GiB/s</li></ul><br>Standard<br><ul><li>Up to 300 MiB/s</li></ul> | Ultra and Premium<br><ul><li>Up to 4.5 GiB/s</li></ul><br>Standard<br><ul><li>Up to 3.2GiB/s</li></ul> |
-| Maximum File Size | Premium<br><ul><li>4 TiB</li></ul><br>Standard<br><ul><li>1 TiB</li></ul> | All tiers<br><ul><li>16 TiB</li></ul> |
+| Maximum File Size | 4 TiB | 16 TiB |
| Maximum IOPS Per File | Premium<br><ul><li>Up to 8,000</li></ul><br>Standard<br><ul><li>1,000</li></ul> | All tiers<br><ul><li>Up to volume limit</li></ul> | | Maximum Throughput Per File | Premium<br><ul><li>300 MiB/s (Up to 1 GiB/s with SMB multichannel)</li></ul><br>Standard<br><ul><li>60 MiB/s</li></ul> | All tiers<br><ul><li>Up to volume limit</li></ul> | | SMB Multichannel | Yes ([Preview](./storage-files-smb-multichannel-performance.md)) | Yes |
synapse-analytics How To Pause Resume Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/how-to-pause-resume-pipelines.md
+
+ Title: How to pause and resume dedicated SQL pools with Synapse Pipelines
+description: Learn to automate pause and resume for a dedicated SQL pool with Synapse Pipelines in Azure Synapse Analytics.
++++ Last updated : 02/05/2021+++
+# Pause and resume dedicated SQL pools with Synapse Pipelines
+
+Pause and resume for dedicated SQL pools can be automated using Synapse Pipelines in Azure Synapse Analytics. Pause and resume can be used to save costs for a dedicated SQL pool. This solution can easily be included in an existing data orchestration process.
+
+The following steps will guide you through setting up automated pause and resume.
+
+1. Create a pipeline.
+1. Set up parameters in your pipeline.
+1. Identify the list of dedicated SQL pools in your Synapse workspace.
+1. Filter any dedicated SQL pools that you don't want to pause or resume from the list.
+1. Loop over each dedicated SQL pool and:
+ 1. Check the state of the dedicated SQL pool.
+ 1. Evaluate the state of the dedicated SQL pool.
+ 1. Pause or resume the dedicated SQL pool.
+
+These steps are laid out in a simple pipeline in Synapse:
+
+![Simple Synapse pipeline](./media/how-to-pause-resume-pipelines/simple-pipeline.png)
++
+Depending upon the nature of your environment, the whole process described here may not apply, and you may just want to choose the appropriate steps. The process described here can be used to pause or resume all instances in a development, test, or PoC environment. For a production environment, you're more likely to schedule pause or resume on an instance-by-instance basis, so you will only need steps 5a through 5c.
+
+The steps above use the REST APIs for Synapse and Azure SQL:
+
+- [Dedicated SQL pool operations](/rest/api/synapse/sqlpools)
+
+- [Azure SQL Database REST API](/rest/api/sql)
+
+Synapse Pipelines allow for the automation of pause and resume, but you can execute these commands on-demand via the tool or application of your choice.
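+
+For example, a pause request could be issued on demand with the Azure CLI (a sketch using `az rest`; replace the placeholders with your own values):
+
+```shell
+# Invoke the pause REST API for a dedicated SQL pool on demand
+az rest --method post \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Synapse/workspaces/<workspace-name>/sqlPools/<pool-name>/pause?api-version=2019-06-01-preview"
+```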
+
+## Prerequisites
+
+- An existing [Azure Synapse workspace](../get-started-create-workspace.md)
+- At least one [dedicated SQL pool](../get-started-analyze-sql-pool.md)
+- Your workspace must be assigned the Azure contributor role. See [Grant Synapse administrators the Azure Contributor role on the workspace](../security/how-to-set-up-access-control.md#step-5-grant-synapse-administrators-the-azure-contributor-role-on-the-workspace).
+
+## Step 1: Create a pipeline in Synapse Studio
+1. Navigate to your workspace and open Synapse Studio.
+1. Select the **Integrate** icon, then select the **+** sign to create a new pipeline.
+1. Name your pipeline PauseResume.
+
+ ![Create a pipeline in Synapse Studio](./media/how-to-pause-resume-pipelines/create-pipeline.png)
+
+## Step 2: Create pipeline parameters
+
+The pipeline you'll create will be parameter driven. Parameters allow you to create a generic pipeline that you can use across multiple subscriptions, resource groups, or dedicated SQL pools. Select the **Parameters** tab near the bottom of the pipeline screen. Select **+New** to create each of the following parameters:
+
+
+|Name |Type |Default value |Description|
+||||--|
+|ResourceGroup |string |Synapse |Name of the resource group for your dedicated SQL pools|
+|SubscriptionID |string |<SubscriptionID> |Subscription ID for your resource group|
+|WorkspaceName |string |Synapse |Name of your workspace|
+|SQLPoolName |string |SQLPool1 |Name of your dedicated SQL pool|
+|PauseOrResume |string |Pause |The state wanted at the end of the pipeline run|
+
+![Pipeline parameters in Synapse Studio.](./media/how-to-pause-resume-pipelines/pipeline-parameters.png)
+
+## Step 3: Create list of dedicated SQL pools
+
+ Set up a **Web** activity. You'll use this activity to create the list of dedicated SQL pools by calling the Dedicated SQL pools - List By Server REST API. The output is a JSON string that contains a list of the dedicated SQL pools in your workspace. The JSON string is passed to the next activity.
+1. Under **Activities** > **General** drag a **Web** activity to the pipeline canvas as the first stage of your pipeline.
+1. In the **General** tab, name this stage GET List.
+1. Select the **Settings** tab then click in the **URL** entry space, then select **Add dynamic content**. Copy and paste the GET request that has been parameterized using the @concat string function below into the dynamic content box. Select **Finish**.
+The following code is a simple Get request:
+
+ ```HTTP
+ GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}/sqlPools?api-version=2019-06-01-preview
+ ```
+
+ GET request that has been parameterized using the @concat string function:
+
+ ```HTTP
+ @concat('https://management.azure.com/subscriptions/',pipeline().parameters.SubscriptionID,'/resourceGroups/',pipeline().parameters.ResourceGroup,'/providers/Microsoft.Synapse/workspaces/',pipeline().parameters.WorkspaceName,'/sqlPools?api-version=2019-06-01-preview')
+ ```
+1. Select the drop-down for **Method** and select **Get**.
+1. Select **Advanced** to expand the content. Select **MSI** as the Authentication type. For Resource, enter `https://management.azure.com/`.
+ > [!IMPORTANT]
+ > For all of the Web Activities / REST API Web calls, you need to ensure that Synapse Pipeline is authenticated against dedicated SQL pool. [Managed Identity](../../data-factory/control-flow-web-activity.md#managed-identity) is required to run these REST API calls.
+
+
+ ![Web activity list for dedicated SQL pools](./media/how-to-pause-resume-pipelines/web-activity-list-sql-pools.png)
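+
+If you want to preview the JSON this activity returns, you can issue the same GET request outside of the pipeline, for example with the Azure CLI (a sketch; replace the placeholders with your values):
+
+```shell
+# List the dedicated SQL pools in a workspace
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Synapse/workspaces/<workspace-name>/sqlPools?api-version=2019-06-01-preview"
+```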
+++
+## Step 4: Filter dedicated SQL pools
+Remove dedicated SQL pools that you don't want to pause or resume. Use a filter activity that filters the values passed from the Get list activity. In this example, we're extracting the records from the array whose names don't end with "prod". Apply other conditions as needed. For example, filter on the sku/name of the Synapse workspace to ensure only valid dedicated SQL pools are identified.
+1. Select and drag the **Filter** activity under **Iteration & conditionals** to the pipeline canvas.
+![Filter dedicated SQL pools](./media/how-to-pause-resume-pipelines/filter-sql-pools.png)
+1. Connect the Get List Web activity to the Filter activity. Select the green tab on the Web activity and drag it to the Filter box.
+1. Enter `@activity('GET List').output.value` for **Items**, where GET List is the name of the preceding Web activity.
+1. Enter `@not(endswith(item().name,'prod'))` for **Condition**. The remaining records in the array are then passed to the next activity.
+
+## Step 5: Create a ForEach loop
+Create a ForEach activity to loop over each dedicated SQL pool.
+1. Select and drag the **ForEach** activity under **Iteration & conditionals** to the pipeline canvas.
+1. On the **General** tab, name the activity. We used 'ForEach_pool'.
+1. On the **Settings** tab, select the **Items** input and select **Add dynamic content**. Scroll to the **Activity outputs** and select the output from your filter activity. Add `.value` to the activity output. The value should be similar to `@activity('Filter_PROD').output.value`. Select **Finish**.
+ ![Loop through dedicated SQL pools](./media/how-to-pause-resume-pipelines/loop-through-sql-pools.png)
+1. Select the **Activities** tab and select the edit pencil to open the ForEach loop canvas.
+
+## Step 5a: Check the state of the dedicated SQL pools
+Checking the state of the dedicated SQL pool requires a Web activity, similar to Step 3. This activity calls the [Check dedicated SQL pool state REST API for Azure Synapse](../sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md#check-database-state).
+1. Select and drag a **Web** activity under **General** to the pipeline canvas.
+2. In the **General** tab, name this stage CheckState.
+3. Select the **Settings** tab.
+4. Click in the **URL** entry space, then select **Add dynamic content**. Copy and paste the GET request that has been parameterized using the @concat string function from below into the dynamic content box. Select **Finish**. Checking the state uses a GET request with the following call:
+
+ ```HTTP
+ GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}/sqlPools/{database-name}?api-version=2019-06-01-preview HTTP/1.1
+ ```
+
+ The GET request parameterized using the @concat string function:
+
+ ```HTTP
+ @concat('https://management.azure.com/subscriptions/',pipeline().parameters.SubscriptionID,'/resourceGroups/',pipeline().parameters.ResourceGroup,'/providers/Microsoft.Synapse/workspaces/',pipeline().parameters.WorkspaceName,'/sqlPools/',item().name,'?api-version=2019-06-01-preview')
+ ```
+
+ In this case, we are using item().name, which is the name of the dedicated SQL pool from Step 3 that was passed to this activity from the ForEach loop. If you are using a pipeline to control a single dedicated SQL pool, you can embed the name of your dedicated SQL pool here, or use a parameter from the pipeline. For example, you could use pipeline().parameters.SQLPoolName.
+
+ The output is a JSON string that contains details of the dedicated SQL pool, including its status (in properties.status). The JSON string is passed to the next activity.
+1. Select the drop-down for **Method** and select **Get**. Select **Advanced** to expand the content. Select **MSI** as the Authentication type. For Resource, enter `https://management.azure.com/`.
+
+![Check state of the dedicated SQL pool](./media/how-to-pause-resume-pipelines/check-sql-pool-state.png)
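+
+To verify the same call outside of the pipeline, you can query a single pool's status with the Azure CLI (a sketch; replace the placeholders with your values):
+
+```shell
+# Retrieve just the status property of one dedicated SQL pool
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Synapse/workspaces/<workspace-name>/sqlPools/<pool-name>?api-version=2019-06-01-preview" \
+  --query "properties.status" --output tsv
+```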
+
+
+
+## Step 5b: Evaluate the state of the dedicated SQL pools
+Evaluate the desired state, Pause or Resume, against the current status, Online or Paused, and then initiate Pause or Resume as needed.
+
+1. Select and drag a **Switch** activity, under **Iteration & conditionals**, to the pipeline canvas.
+1. Connect the **Switch** activity to the **CheckState** activity. Select the green tab on the Web activity and drag it to the Switch box.
+1. In the **General** tab, name this stage State-PauseOrResume.
+
+ Based on the desired state and the current status, only the following two combinations will require a change in state: Paused->Resume or Online->Pause.
+
+1. On the **Activities** tab, copy the code below into the **Expression**.
+
+ ```HTTP
+ @concat(activity('CheckState').output.properties.status,'-',pipeline().parameters.PauseOrResume)
+ ```
+
+ Where CheckState is the name of the preceding Web activity, with output.properties.status defining the current status and pipeline().parameters.PauseOrResume indicating the desired state.
+
+ The check condition compares the desired state with the current status. If the desired state is Resume and the current status is Paused, a Resume activity is invoked within the Paused-Resume case. If the desired state is Pause and the current status is Online, a Pause activity is invoked within the Online-Pause case. Any other cases, such as a desired state of Pause and a current status of Paused, or a desired state of Resume and a current status of Online, would require no action and are handled by the Default case, which has no activities.
+1. On the Activities tab, select **+ Add Case**. Add the cases `Paused-Resume` and `Online-Pause`.
+ ![Check status condition of the dedicated SQL pool](./media/how-to-pause-resume-pipelines/check-condition.png)
+
+### Step 5c: Pause or Resume dedicated SQL pools
+
+The final step, and the only relevant step for some requirements, is to initiate the pause or resume of your dedicated SQL pool. This step again uses a Web activity, calling the [Pause or Resume compute REST API for Azure Synapse](../sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md#pause-compute).
+1. Select the activity edit pencil and add a **Web** activity to the State-PauseOrResume canvas.
+1. Select the **Settings** tab then click in the **URL** entry space, then select **Add dynamic content**. Copy and paste the POST request that has been parameterized using the @concat string function below into the dynamic content box. Select **Finish**.
+1. Select the drop-down for **Method** and select **POST**.
+1. In the Body section, type "Pause and Resume".
+1. Select **Advanced** to expand the content. Select **MSI** as the Authentication type. For Resource, enter `https://management.azure.com/`.
+1. Add a second activity for the resume functionality using the parameterized code below.
+
+ ![Resume dedicated SQL pool](./media/how-to-pause-resume-pipelines/true-condition-resume.png)
+
+
+ The example here is to resume a dedicated SQL pool, invoking a POST request using the following call:
+
+ ```HTTP
+ POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}/sqlPools/{database-name}/resume?api-version=2019-06-01-preview HTTP/1.1
+ ```
+
+ You can parameterize the POST statement from above using the @concat string function:
+
+ ```HTTP
+ @concat('https://management.azure.com/subscriptions/',pipeline().parameters.SubscriptionID,'/resourceGroups/',pipeline().parameters.ResourceGroup,'/providers/Microsoft.Synapse/workspaces/',pipeline().parameters.WorkspaceName,'/sqlPools/',activity('CheckState').output.name,'/resume?api-version=2019-06-01-preview')
+ ```
+
+ In this case, we are using activity('CheckState').output.name with the names of the dedicated SQL pools from Step 5a that were passed to this activity through the Switch condition. If you are using a single activity against a single database, you could embed the name of your dedicated SQL pool here, or use a parameter from the pipeline. For example, you could use pipeline().parameters.SQLPoolName.
+
+ The POST request to pause a dedicated SQL pool is:
+
+ ```HTTP
+ POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Synapse/workspaces/{workspace-name}/sqlPools/{database-name}/pause?api-version=2019-06-01-preview HTTP/1.1
+ ```
+
+ The POST request can be parameterized using the @concat string function as shown:
+
+ ```HTTP
+ @concat('https://management.azure.com/subscriptions/',pipeline().parameters.SubscriptionID,'/resourceGroups/',pipeline().parameters.ResourceGroup,'/providers/Microsoft.Synapse/workspaces/',pipeline().parameters.WorkspaceName,'/sqlPools/',activity('CheckState').output.name,'/pause?api-version=2019-06-01-preview')
+ ```
+
+## Pipeline run output
+
+When the full pipeline is run, you will see the output listed below. You can run your pipeline by selecting **Debug** mode or by selecting **Add trigger**. For the pipeline results below, the pipeline parameter named "ResourceGroup" was set to a single resource group that had two dedicated SQL pools. One was named testprod and was filtered out; the second was named test1. The test1 dedicated SQL pool was paused, so the job initiated a resume.
+
+![Pipeline run output](./media/how-to-pause-resume-pipelines/pipeline-run-output.png)
+
+## Save your pipeline
+
+To save your pipeline, select **Publish all** above your pipeline.
+
+## Schedule your pause or resume pipeline to run
+
+To schedule your pipeline, select **Add trigger** at the top of your pipeline. Follow the screens to schedule your pipeline to run at a specified time.
+
+![Select trigger to set time for pipeline to run](./media/how-to-pause-resume-pipelines/trigger.png)
+
+## Next steps
+
+Further details on Managed Identity for Azure Synapse, and how Managed Identity is added to your dedicated SQL pool, can be found here:
+
+[Azure Synapse workspace managed identity](../security/synapse-workspace-managed-identity.md)
+
+[Grant permissions to workspace managed identity](../security/how-to-grant-workspace-managed-identity-permissions.md)
+
+[SQL access control for Synapse pipeline runs](../security/how-to-set-up-access-control.md#step-73-sql-access-control-for-synapse-pipeline-runs)
+
synapse-analytics Query Parquet Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-parquet-files.md
Make sure that you can access this file. If your file is protected with SAS key
> Ensure you are using a UTF-8 database collation (for example `Latin1_General_100_BIN2_UTF8`) because string values in PARQUET files are encoded using UTF-8 encoding. > A mismatch between the text encoding in the PARQUET file and the collation may cause unexpected conversion errors. > You can easily change the default collation of the current database using the following T-SQL statement:
-> `alter database current collate Latin1_General_100_BIN2_UTF8`
+> `alter database current collate Latin1_General_100_BIN2_UTF8`
+
+If you use a _BIN2 collation, you get an additional performance boost. A BIN2 collation is compatible with parquet string sorting rules, so parts of the parquet files that don't contain data needed by the queries can be eliminated (file/column-segment pruning). If you use a non-BIN2 collation, all data from the parquet files is loaded into Synapse SQL and filtering happens within the SQL process, which might be much slower than eliminating the unneeded data at the file level. The BIN2 performance optimization applies only to parquet and Cosmos DB. The downside is that you lose fine-grained comparison rules like case insensitivity.
### Data source usage
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/customize-rdp-properties.md
See [supported RDP file settings](/windows-server/remote/remote-desktop-services
RDP files have the following properties by default:
-|RDP property|On Desktop|As a RemoteApp|
-||||
-|Multi-monitor mode|Disabled|Enabled|
-|Drive redirections enabled|Drives, clipboard, printers, COM ports, and smart cards|Drives, clipboard, and printers|
-|Remote audio mode|Play locally|Play locally|
+|RDP property|For both Desktop and RemoteApp|
+|||
+|Multi-monitor mode|Disabled|
+|Drive redirections enabled|Drives, clipboard, printers, COM ports, smart cards, devices, and usbdevicestore|
+|Remote audio mode|Play locally|
+|VideoPlayback|Enabled|
+|EnableCredssp|Enabled|
+
+>[!NOTE]
+>Multi-monitor mode is only applicable for desktop app groups and will be ignored for RemoteApp app groups.
## Prerequisites
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/time-sync.md
Title: Time sync for Linux VMs in Azure description: Time sync for Linux virtual machines.- -
-tags: azure-resource-manager
Previously updated : 08/20/2020 Last updated : 04/30/2021
Azure hosts are synchronized to internal Microsoft time servers that take their
On stand-alone hardware, the Linux OS only reads the host hardware clock on boot. After that, the clock is maintained using the interrupt timer in the Linux kernel. In this configuration, the clock will drift over time. In newer Linux distributions on Azure, VMs can use the VMICTimeSync provider, included in the Linux integration services (LIS), to query for clock updates from the host more frequently.
-Virtual machine interactions with the host can also affect the clock. During [memory preserving maintenance](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot), VMs are paused for up to 30 seconds. For example, before maintenance begins the VM clock shows 10:00:00 AM and lasts 28 seconds. After the VM resumes, the clock on the VM would still show 10:00:00 AM, which would be 28 seconds off. To correct for this, the VMICTimeSync service monitors what is happening on the host and prompts for changes to happen on the VMs to compensate.
+Virtual machine interactions with the host can also affect the clock. During [memory preserving maintenance](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot), VMs are paused for up to 30 seconds. For example, suppose the VM clock shows 10:00:00 AM when maintenance begins and the pause lasts 28 seconds. After the VM resumes, the clock on the VM would still show 10:00:00 AM, which would be 28 seconds off. To correct for this, the VMICTimeSync service monitors what is happening on the host and updates the time-of-day clock in Linux VMs to compensate.
Without time synchronization working, the clock on the VM would accumulate errors. When there is only one VM, the effect might not be significant unless the workload requires highly accurate timekeeping. But in most cases, we have multiple, interconnected VMs that use time to track transactions and the time needs to be consistent throughout the entire deployment. When time between VMs is different, you could see the following effects:
- If the clock is off, billing could be calculated incorrectly.

## Configuration options
-There are generally three ways to configure time sync for your Linux VMs hosted in Azure:
-
-- The default configuration for Azure Marketplace images uses both NTP time and VMICTimeSync host-time.
-- Host-only using VMICTimeSync.
-- Use another, external time server with or without using VMICTimeSync host-time.
-
-
-### Use the default
-
-By default, most Azure Marketplace images for Linux are configured to sync from two sources:
+Time sync requires that a time sync service be running in the Linux VM, plus a source of accurate time information against which to synchronize.
+Typically ntpd or chronyd is used as the time sync service, though there are other open source time sync services that can be used as well.
+The source of accurate time information can be the Azure host or an external time service that is accessed over the public internet.
+By itself, the VMICTimeSync service does not provide ongoing time sync between the Azure host and a Linux VM; it only corrects the VM clock after pauses for host maintenance, as described above.
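+For example, you can check which time sync service is currently active on a VM. This is a quick sketch; service unit names vary by distribution:
+
+```bash
+# Report the state of the common time sync services; typically only one is active
+systemctl is-active chronyd ntpd systemd-timesyncd
+```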
-- NTP as primary, which gets time from an NTP server. For example, Ubuntu 16.04 LTS Marketplace images use **ntp.ubuntu.com**.
-- The VMICTimeSync service as secondary, used to communicate the host time to the VMs and make corrections after the VM is paused for maintenance. Azure hosts use Microsoft-owned Stratum 1 devices to keep accurate time.
+Historically, most Azure Marketplace images with Linux have been configured in one of two ways:
+- No time sync service is running by default
+- ntpd is running as the time sync service, and synchronizing against an external NTP time source that is accessed over the network. For example, Ubuntu 18.04 LTS Marketplace images use **ntp.ubuntu.com**.
-In newer Linux distributions, the VMICTimeSync service provides a Precision Time Protocol (PTP) hardware clock source, but earlier distributions may not provide this clock source and will fall-back to NTP for getting time from the host.
+To confirm ntpd is synchronizing correctly, run the `ntpq -p` command.
-To confirm NTP is synchronizing correctly, run the `ntpq -p` command.
+Starting in early calendar 2021, the most current Azure Marketplace images with Linux are being changed to use chronyd as the time sync service,
+and chronyd is configured to synchronize against the Azure host rather than an external NTP time source. The Azure host time is usually the best time source to synchronize
+against, as it is maintained very accurately and reliably, and is accessible without the variable network delays inherent in accessing an external NTP time source
+over the public internet.
-### Host-only
+The VMICTimeSync service is used in parallel and provides two functions:
+- Immediately updates the Linux VM time-of-day clock after a host maintenance event
+- Instantiates an IEEE 1588 Precision Time Protocol (PTP) hardware clock source as a /dev/ptp device that provides the accurate time-of-day from the Azure host. Chronyd can be configured to synchronize against this time source (which is the default configuration in the newest Linux images). Linux distributions with kernel version 4.11 or later (or version 3.10.0-693 or later for RHEL/CentOS 7) support the /dev/ptp device. For earlier kernel versions that do not support /dev/ptp for Azure host time, only synchronization against an external time source is possible. A quick way to check for this support is shown after this list.
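+For example, the following sketch checks whether the kernel and VM expose the PTP device; the paths shown are typical rather than guaranteed:
+
+```bash
+# The kernel must be 4.11+ (or 3.10.0-693+ on RHEL/CentOS 7) for /dev/ptp support
+uname -r
+
+# List PTP clock devices; on Azure, one of these is backed by the host
+ls -l /dev/ptp*
+```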
-Because NTP servers like time.windows.com and ntp.ubuntu.com are public, syncing time with them requires sending traffic over the internet. Varying packet delays can negatively affect quality of the time sync. Removing NTP by switching to host-only sync can sometimes improve your time sync results.
+Of course, the default configuration can be changed. An older image that is configured to use ntpd and an external time source can be changed to use chronyd and the /dev/ptp device for Azure host time. Similarly, an image using Azure host time via a /dev/ptp device can be configured to use an external NTP time source if required by your application or workload.
-Switching to host-only time sync makes sense if you experience time sync issues using the default configuration. Try out the host-only sync to see if that would improve the time sync on your VM.
-
-### External time server
-
-If you have specific time sync requirements, there is also an option of using external time servers. External time servers can provide specific time, which can be useful for test scenarios, ensuring time uniformity with machines hosted in non-Microsoft datacenters, or handling leap seconds in a special way.
-
-You can combine an external time server with the VMICTimeSync service to provide results similar to the default configuration. Combining an external time server with VMICTimeSync is the best option for dealing with issues that can be cause when VMs are paused for maintenance.
## Tools and resources
hv_utils 24418 0
hv_vmbus 397185 7 hv_balloon,hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc
```
-See if the Hyper-V integration services daemon is running.
-
-```bash
-ps -ef | grep hv
-```
-
-You should see something similar to this:
-
-```
-root 229 2 0 17:52 ? 00:00:00 [hv_vmbus_con]
-root 391 2 0 17:52 ? 00:00:00 [hv_balloon]
-```
-
### Check for PTP Clock Source
-With newer versions of Linux, a Precision Time Protocol (PTP) clock source is available as part of the VMICTimeSync provider. On older versions of Red Hat Enterprise Linux or CentOS 7.x the [Linux Integration Services](https://github.com/LIS/lis-next) can be downloaded and used to install the updated driver. When the PTP clock source is available, the Linux device will be of the form /dev/ptp*x*.
+With newer versions of Linux, a Precision Time Protocol (PTP) clock source corresponding to the Azure host is available as part of the VMICTimeSync provider.
+On older versions of Red Hat Enterprise Linux or CentOS 7.x the [Linux Integration Services](https://github.com/LIS/lis-next) can be downloaded and used to
+install the updated driver. When the PTP clock source is available, the Linux device will be of the form /dev/ptp*x*.
See which PTP clock sources are available.
In this example, the value returned is *ptp0*, so we use that to check the clock
cat /sys/class/ptp/ptp0/clock_name
```
-This should return `hyperv`.
+This should return `hyperv`, meaning the Azure host.
+
+In Linux VMs with Accelerated Networking enabled, you may see multiple PTP devices listed because the Mellanox mlx5 driver also creates a /dev/ptp device.
+Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be /dev/ptp0 or it might be /dev/ptp1, which makes
+it difficult to configure chronyd with the correct clock source. To solve this problem, the most recent Linux images have a udev rule that creates the
+symlink /dev/ptp_hyperv to whichever /dev/ptp entry corresponds to the Azure host. Chrony should be configured to use this symlink instead of /dev/ptp0 or /dev/ptp1.
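+For example, to confirm that the symlink exists and see which PTP device it points to (the symlink is only present on images that ship the udev rule):
+
+```bash
+# Show the udev-created symlink and its target PTP device
+ls -l /dev/ptp_hyperv
+```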
### chrony
-On Ubuntu 19.10 and later versions, Red Hat Enterprise Linux, and CentOS 8.x, [chrony](https://chrony.tuxfamily.org/) is configured to use a PTP source clock. Instead of chrony, older Linux releases use the Network Time Protocol daemon (ntpd), which doesn't support PTP sources. To enable PTP in those releases, chrony must be manually installed and configured (in chrony.conf) by using the following code:
+On Ubuntu 19.10 and later versions, Red Hat Enterprise Linux, and CentOS 8.x, [chrony](https://chrony.tuxfamily.org/) is configured to use a PTP source clock. Instead of chrony, older Linux releases use the Network Time Protocol daemon (ntpd), which doesn't support PTP sources. To enable PTP in those releases, chrony must be manually installed and configured (in chrony.conf) by using the following statement:
```bash
refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0
```
+As noted above, if the /dev/ptp_hyperv symlink is available, use it instead of /dev/ptp0 to avoid any confusion with the /dev/ptp device created by the Mellanox mlx5 driver.
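+With the symlink, the chrony.conf statement becomes the following; the parameters are unchanged, only the device path differs:
+
+```bash
+refclock PHC /dev/ptp_hyperv poll 3 dpoll -2 offset 0
+```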
For more information about Ubuntu and NTP, see [Time Synchronization](https://ubuntu.com/server/docs/network-ntp).
virtual-machines Vm Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-support-help.md
+
+ Title: Azure Virtual Machine support and help options
+description: How to obtain help and support for questions or problems when you create solutions using Azure Virtual Machines.
++++ Last updated : 4/28/2021+++
+# Support and troubleshooting for Azure VMs
+
+Here are suggestions for where you can get help when developing your Azure Virtual Machines solutions.
+
+## Self-help troubleshooting content
+<div class='icon is-large'>
+ <img alt='Self help content' src='https://docs.microsoft.com/media//common/i_article.svg'>
+</div>
+
+Various articles explain how to determine, diagnose, and fix issues that you might encounter when using Azure Virtual Machines. Use these articles to troubleshoot deployment failures, unexpected restarts, connection issues, and more.
+
+For a full list of self-help troubleshooting content, see [Azure Virtual Machine troubleshooting documentation](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/welcome-virtual-machines).
++
+## Post a question on Microsoft Q&A
+
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/microsoft-logo.png'>
+</div>
+
+For quick and reliable answers to your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support.
+
+If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when asking your question:
++
+| Area | Tag |
+|-|-|
+| [Azure Virtual Machines](./linux/overview.md) | [azure-virtual-machines](/answers/topics/azure-virtual-machines.html) |
+| [Azure SQL Virtual Machines](https://docs.microsoft.com/azure/azure-sql/virtual-machines/) | [azure-sql-virtual-machines](/answers/topics/azure-sql-virtual-machines.html)|
+| [Azure Virtual Machine backup](backup-recovery.md) | [azure-virtual-machine-backup](/answers/topics/azure-virtual-machine-backup.html) |
+| [Azure Virtual Machine extension](./extensions/overview.md) | [azure-virtual-machine-extension](/answers/topics/azure-virtual-machine-extension.html)|
+| [Azure Virtual Machine Images](shared-image-galleries.md) | [azure-virtual-machine-images](/answers/topics/azure-virtual-machine-images.html) |
+| [Azure Virtual Machine migration](classic-vm-deprecation.md) | [azure-virtual-machine-migration](/answers/topics/azure-virtual-machine-migration.html) |
+| [Azure Virtual Machine monitoring](../azure-monitor/vm/monitor-vm-azure.md) | [azure-virtual-machine-monitoring](/answers/topics/azure-virtual-machine-monitoring.html) |
+| [Azure Virtual Machine networking](network-overview.md) | [azure-virtual-machine-networking](/answers/topics/azure-virtual-machine-networking.html) |
+| [Azure Virtual Machine storage](managed-disks-overview.md) | [azure-virtual-machine-storage](/answers/topics/azure-virtual-machine-storage.html) |
+| [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md) | [azure-virtual-machine-scale-set](/answers/topics/azure-virtual-machine-scale-set.html) |
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.
++
+## Create a GitHub issue
+
+<div class='icon is-large'>
+ <img alt='GitHub-image' src='../active-directory/develop/media/common/github.svg'>
+</div>
+
+If you need help with the language and tools used to develop and manage Azure Virtual Machines, open an issue in its repository on GitHub.
+
+| Library | GitHub issues URL|
+| | |
+| Azure PowerShell | https://github.com/Azure/azure-powershell/issues |
+| Azure CLI | https://github.com/Azure/azure-cli/issues |
+| Azure REST API | https://github.com/Azure/azure-rest-api-specs/issues |
+| Azure SDK for Java | https://github.com/Azure/azure-sdk-for-java/issues |
+| Azure SDK for Python | https://github.com/Azure/azure-sdk-for-python/issues |
+| Azure SDK for .NET | https://github.com/Azure/azure-sdk-for-net/issues |
+| Azure SDK for JavaScript | https://github.com/Azure/azure-sdk-for-js/issues |
+| Jenkins | https://github.com/Azure/jenkins/issues |
+| Terraform | https://github.com/Azure/terraform/issues |
+| Ansible | https://github.com/Azure/Ansible/issues |
+++
+## Submit feature requests on Azure Feedback
+
+<div class='icon is-large'>
+ <img alt='UserVoice' src='https://docs.microsoft.com/media/logos/logo-uservoice.svg'>
+</div>
+
+To request new features, post them on Azure Feedback. Share your ideas for improving Azure Virtual Machines.
+
+| Service | Azure Feedback URL |
+|-||
+| Azure Virtual Machines | https://feedback.azure.com/forums/216843-virtual-machines |
+
+## Stay informed of updates and new releases
+
+<div class='icon is-large'>
+ <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+</div>
+
+Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
+
+News and information about Azure Virtual Machines is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
++
+## Next steps
+
+Learn more about [Azure Virtual Machines](https://docs.microsoft.com/azure/virtual-machines/).
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 04/27/2021 Last updated : 04/30/2021
In this section, you can find information in how to configure SSO with most of t
In this section, you find documents about Microsoft Power BI integration into SAP data sources as well as Azure Data Factory integration into SAP BW.

## Change Log
+- April 30, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to include warning about incompatible change with Azure Fence Agent in a version of package python3-azure-mgmt-compute (SLES 15)
- April 27, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md) to add links to important SAP notes in the prerequisites section
- April 27, 2021: Added new Msv2, Mdsv2 VMs into HANA storage configuration in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- April 27, 2021: Added requirement for using same storage types in HANA System Replication across all VMs of HSR configuration in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 02/03/2020 Last updated : 04/30/2021
The following items are prefixed with either **[A]** - applicable to all nodes,
>You can check the extension by running `SUSEConnect --list-extensions`.
>To achieve faster failover times with Azure Fence Agent:
> - on SLES 12 SP4 or SLES 12 SP5 install version **4.6.2** or higher of package python-azure-mgmt-compute
- > - on SLES 15 install version **4.6.2** or higher of package python**3**-azure-mgmt-compute
-
+ > - on SLES 15.X install version **4.6.2** of package python**3**-azure-mgmt-compute, but not higher. Avoid version 17.0.0-6.7.1 of package python**3**-azure-mgmt-compute, as it contains changes that are incompatible with Azure Fence Agent.
+
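+For example, on SLES 15 you might pin the package to the known-good version. This is a sketch using standard zypper commands; verify the exact version string available in your repositories:
+
+```bash
+# Install exactly version 4.6.2 of the Azure compute SDK package
+sudo zypper install --oldpackage python3-azure-mgmt-compute=4.6.2
+
+# Lock the package so zypper does not upgrade it to an incompatible version
+sudo zypper addlock python3-azure-mgmt-compute
+```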
1. **[A]** Setup host name resolution

   You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.