Updates from: 07/13/2021 03:04:16
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 06/02/2021 Last updated : 07/12/2021
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## June 2021
+
+### New articles
+
+- [Enable authentication in your own web API using Azure Active Directory B2C](enable-authentication-web-api.md)
+- [Enable authentication in your own Single Page Application using Azure Active Directory B2C](enable-authentication-spa-app.md)
+- [Publish your Azure AD B2C app to the Azure AD app gallery](publish-app-to-azure-ad-app-gallery.md)
+- [Configure authentication in a sample Single Page application using Azure Active Directory B2C](configure-authentication-sample-spa-app.md)
+- [Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C](configure-authentication-sample-web-app-with-api.md)
+- [Configure authentication in a sample Single Page application using Azure Active Directory B2C options](enable-authentication-spa-app-options.md)
+- [Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C options](enable-authentication-web-app-with-api-options.md)
+- [Enable authentication in your own web application that calls a web API using Azure Active Directory B2C](enable-authentication-web-app-with-api.md)
+- [Sign-in options in Azure AD B2C](sign-in-options.md)
+
+### Updated articles
+
+- [User profile attributes](user-profile-attributes.md)
+- [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md)
+- [Configure authentication in a sample web application using Azure Active Directory B2C options](enable-authentication-web-application-options.md)
+- [Set up a sign-in flow in Azure Active Directory B2C](add-sign-in-policy.md)
+- [Set up a sign-up and sign-in flow in Azure Active Directory B2C](add-sign-up-and-sign-in-policy.md)
+- [Set up the local account identity provider](identity-provider-local.md)
+- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
+- [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md)
+- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
++ ## May 2021 ### New articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 06/02/2021 Last updated : 07/12/2021
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## June 2021
+
+### New articles
+
+- [Configure provisioning using Microsoft Graph APIs](application-provisioning-configuration-api.md)
+- [Understand how expression builder in Application Provisioning works](expression-builder.md)
+
+### Updated articles
+
+- [How Application Provisioning works in Azure Active Directory](how-provisioning-works.md)
+- [Plan cloud HR application to Azure Active Directory user provisioning](plan-cloud-hr-provision.md)
+- [How Azure Active Directory provisioning integrates with Workday](workday-integration-reference.md)
++ ## May 2021 ### Updated articles
active-directory Application Proxy Integrate With Remote Desktop Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-remote-desktop-services.md
Previously updated : 04/27/2021 Last updated : 07/12/2021
In an RDS deployment, the RD Web role and the RD Gateway role run on Internet-fa
## Requirements - Both the RD Web and RD Gateway endpoints must be located on the same machine, and with a common root. RD Web and RD Gateway are published as a single application with Application Proxy so that you can have a single sign-on experience between the two applications.-- You should already have [deployed RDS](/windows-server/remote/remote-desktop-services/rds-in-azure), and [enabled Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md). Ensure you have satisfied the pre-requisites to enable Application Proxy, such as installing the connector, opening required ports and URLS, and enabling TLS 1.2 on the server.
+- You should already have [deployed RDS](/windows-server/remote/remote-desktop-services/rds-in-azure), and [enabled Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md). Ensure you have satisfied the pre-requisites to enable Application Proxy, such as installing the connector, opening required ports and URLS, and enabling TLS 1.2 on the server. To learn which ports need to be opened, and other details, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md).
- Your end users must use a compatible browser to connect to RD Web or the RD Web client. For more details see [Support for client configurations](#support-for-other-client-configurations). - When publishing RD Web, it is recommended to use the same internal and external FQDN. If the internal and external FQDNs are different then you should disable Request Header Translation to avoid the client receiving invalid links. - If you are using RD Web on Internet Explorer, you will need to enable the RDS ActiveX add-on.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 06/02/2021 Last updated : 07/12/2021
Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## June 2021
+
+### Updated articles
+
+- [Secure access to on-premises APIs with Azure Active Directory Application Proxy](application-proxy-secure-api-access.md)
+ ## May 2021 ### Updated articles
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/terms-of-use.md
Previously updated : 07/06/2021 Last updated : 07/12/2021
Users are only required to accept the terms of use policy once and they will not
Users can review and see the terms of use policies that they have accepted by using the following procedure.
-1. Sign in to [https://myapps.microsoft.com](https://myapps.microsoft.com).
-1. In the upper right corner, click your name and select **Profile**.
-
- ![MyApps site with the user's pane open](./media/terms-of-use/tou14.png)
-
-1. On your Profile page, click **Review terms of use**.
-
- ![Profile page for a user showing the Review terms of use link](./media/terms-of-use/tou13a.png)
-
-1. From there, you can review the terms of use policies you have accepted.
+1. Sign in to [https://myaccount.microsoft.com/](https://myaccount.microsoft.com/).
+1. Select **Settings & Privacy**.
+1. Select **Privacy**.
+1. Under **Organization's notice**, select **View** next to the terms of use statement you want to review.
## Edit terms of use details
A: You can [review previously accepted terms of use policies](#how-users-can-rev
A: If you have configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user will be required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409). **Q: What endpoints does the terms of use service use for authentication?**<br />
-A: Terms of use utilizes the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com and https://account.activedirectory.windowsazure.com. If your organization has an allow list of URLs for enrollment, you will need to add these endpoints to your allow list, along with the Azure AD endpoints for sign-in.
+A: Terms of use utilizes the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you will need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
## Next steps
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
After you've added Google as one of your application's sign-in options, on the *
> [!IMPORTANT] >
-> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication could be blocked for Gmail users (with the error screen shown below in [What to expect](#what-to-expect)). This issue occurs only if you create Google integration for self-service sign-up user flows after July 12, 2021 and Gmail authentications in your custom or line-of-business applications haven't been moved to system web-views. Because system web-views are enabled by default, most apps will not be affected. To avoid the issue, we strongly advise you to move Gmail authentications to system browsers before creating any new Google integrations for self-service sign-up. Please refer to [Action needed for embedded web-views](#action-needed-for-embedded-frameworks).
+> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up or for inviting external users for their custom or line-of-business applications, authentication could be blocked for Gmail users (with the error screen shown below in [What to expect](#what-to-expect)). This issue occurs only if you create Google integration for self-service sign-up user flows or invitations after July 12, 2021 and Gmail authentications in your custom or line-of-business applications haven't been moved to system web-views. Because system web-views are enabled by default, most apps will not be affected. To avoid the issue, we strongly advise you to move Gmail authentications to system browsers before creating any new Google integrations for self-service sign-up. Please refer to [Action needed for embedded web-views](#action-needed-for-embedded-frameworks).
> - **Starting September 30, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](#deprecation-of-web-view-sign-in-support). ## What is the experience for the Google user?
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 06/02/2021 Last updated : 07/12/2021
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## June 2021
+
+### New articles
+
+- [Azure Active Directory (Azure AD) identity provider for External Identities](azure-ad-account.md)
+
+### Updated articles
+
+- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
+- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
+- [Quickstart: Add guest users to your directory in the Azure portal](b2b-quickstart-add-guest-users-portal.md)
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
+- [Add Google as an identity provider for B2B guest users](google-federation.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+ ## May 2021 ### New articles
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
+
+ Title: Azure AD B2C Deployment
+description: Azure Active Directory B2C Deployment guide
+++++ Last updated : 7/12/2021++++++++
+# Azure Active Directory B2C deployment plans
+
+Azure Active Directory B2C is a scalable identity and access management solution. Its flexibility in meeting business requirements and its smooth integration with existing infrastructure enable further digitalization.
+
+To help organizations understand the business requirements and respect compliance boundaries, a step-by-step approach is recommended throughout an Azure Active Directory (Azure AD) B2C deployment.
+
+| Capability | Description |
+|:--|:--|
+| [Plan](#plan-an-azure-ad-b2c-deployment) | Prepare Azure AD B2C projects for deployment. Start by identifying the stakeholders and later defining a project timeline. |
+| [Implement](#implement-an-azure-ad-b2c-deployment) | Start with enabling authentication and authorization and later perform full application onboarding. |
+| [Monitor](#monitor-an-azure-ad-b2c-solution) | Enable logging, auditing, and reporting once an Azure AD B2C solution is in place. |
+
+## Plan an Azure AD B2C deployment
+
+This phase includes the following capabilities.
+
+| Capability | Description |
+|:--|:--|
+|[Business requirements review](#business-requirements-review) | Assess your organization's status and expectations |
+| [Stakeholders](#stakeholders) |Build your project team |
+|[Communication](#communication) | Communicate with your team about the project |
+| [Timeline](#timeline) | Reminder of key project milestones |
+
+### Business requirements review
+
+- Assess the primary reason to switch off existing systems and [move to Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/overview).
+
+- For a new application, [plan and design](https://docs.microsoft.com/azure/active-directory-b2c/best-practices#planning-and-design) the Customer Identity Access Management (CIAM) system.
+
+- Identify customer's location and [create a tenant in the corresponding datacenter](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant).
+
+- Check the type of applications you have
+ - Check the platforms that are currently supported - [MSAL](https://docs.microsoft.com/azure/active-directory/develop/msal-overview) or [Open source](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9).
+  - For backend services, use the [client credentials flow](https://docs.microsoft.com/azure/active-directory/develop/msal-authentication-flows#client-credentials) (see the sketch after this list).
+
+- If you intend to migrate from an existing Identity Provider (IdP):
+
+  - Consider using the [seamless migration approach](https://docs.microsoft.com/azure/active-directory-b2c/user-migration#seamless-migration).
+  - Learn [how to migrate the existing applications](https://github.com/azure-ad-b2c/user-migration).
+  - Ensure that multiple solutions can coexist at once.
+
+- Decide the protocols you want to use
+
+ - If you're currently using Kerberos, NTLM, and WS-Fed, [migrate and refactor your applications](https://www.bing.com/videos/search?q=application+migration+in+azure+ad+b2c&docid=608034225244808069&mid=E21B87D02347A8260128E21B87D02347A8260128&view=detail&FORM=VIRE). Once migrated, your applications can support modern identity protocols such as OAuth 2.0 and OpenID Connect (OIDC) to enable further identity protection and security.
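+
+One of the items above calls out the client credentials flow for backend services. The following is a minimal token-acquisition sketch using MSAL for Python; the tenant, client ID, secret, and scope shown are hypothetical placeholders rather than values from this article:
+
+```python
+import msal
+
+# Hypothetical values - replace with your own tenant and app registration details.
+TENANT = "contoso.onmicrosoft.com"
+CLIENT_ID = "00000000-0000-0000-0000-000000000000"
+CLIENT_SECRET = "<client-secret-kept-in-key-vault>"
+
+app = msal.ConfidentialClientApplication(
+    CLIENT_ID,
+    authority=f"https://login.microsoftonline.com/{TENANT}",
+    client_credential=CLIENT_SECRET,
+)
+
+# Acquire an app-only token for a protected resource (here, Microsoft Graph).
+result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
+if "access_token" in result:
+    print("Token acquired")
+else:
+    print("Error:", result.get("error_description"))
+```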
+
+### Stakeholders
+
+When technology projects fail, it's typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-deployment-plans#include-the-right-stakeholders) and that stakeholders understand their roles.
+
+- Identify the primary architect, project manager, and owner for the application.
+
+- Consider providing a Distribution List (DL). Using this DL, you can communicate product issues to the Microsoft account team or engineering, ask questions, and receive important notifications.
+
+- Identify a partner or resource outside of your organization who can support you.
+
+### Communication
+
+Communication is critical to the success of any new service. Proactively communicate with your users about the change. Inform them in a timely manner about how their experience will change, when it will change, and how to get support if they experience issues.
+
+### Timeline
+
+Define clear expectations and follow-up plans to meet key milestones:
+
+- Expected pilot date
+
+- Expected launch date
+
+- Any dates that may affect project delivery date
+
+## Implement an Azure AD B2C deployment
+
+This phase includes the following capabilities.
+
+| Capability | Description |
+|:-|:--|
+| [Deploy authentication and authorization](#deploy-authentication-and-authorization) | Understand the [authentication and authorization](https://docs.microsoft.com/azure/active-directory/develop/authentication-vs-authorization) scenarios |
+| [Deploy applications and user identities](#deploy-applications-and-user-identities) | Plan to deploy client application and migrate user identities |
+| [Client application onboarding and deliverables](#client-application-onboarding-and-deliverables) | Onboard the client application and test the solution |
+| [Security](#security) | Enhance the security of your Identity solution |
+|[Compliance](#compliance) | Address regulatory requirements |
+|[User experience](#user-experience) | Enable a user-friendly service |
+
+### Deploy authentication and authorization
+
+- Start with [setting up an Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant).
+
+- For business-driven authorization, use the [Azure AD B2C Identity Experience Framework (IEF) sample user journeys](https://github.com/azure-ad-b2c/samples#local-account-policy-enhancements).
+
+- Try [Open Policy Agent](https://www.openpolicyagent.org/).
+
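+To make the authorization side concrete, the sketch below validates an Azure AD B2C-issued access token inside a Python API by checking its signature against the keys published for the user flow. The tenant, policy, and audience values are hypothetical placeholders, and PyJWT is assumed as the validation library:
+
+```python
+import jwt  # PyJWT
+from jwt import PyJWKClient
+
+# Hypothetical tenant, user flow (policy), and API app registration values.
+TENANT = "contosob2c"
+POLICY = "B2C_1_signupsignin"
+AUDIENCE = "00000000-0000-0000-0000-000000000000"  # client ID of the API
+
+JWKS_URI = (
+    f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
+    f"{POLICY}/discovery/v2.0/keys"
+)
+
+def validate_token(access_token: str) -> dict:
+    """Return the claims if the signature, audience, and expiry check out."""
+    signing_key = PyJWKClient(JWKS_URI).get_signing_key_from_jwt(access_token)
+    return jwt.decode(
+        access_token,
+        signing_key.key,
+        algorithms=["RS256"],
+        audience=AUDIENCE,
+    )
+```
+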
+Learn more about Azure AD B2C in [this developer course](https://aka.ms/learnaadb2c).
+
+Follow this sample checklist for more guidance:
+
+- Identify the different personas that need access to your application.
+
+- Define how you manage permissions and entitlements in your existing system today and how to plan for the future.
+
+- Check if you have a permission store and if there are any permissions that need to be added to the directory.
+
+- If you need delegated administration, define how to address it. For example, management of your customers' customers.
+
+- Check if your application calls an API Manager (APIM) directly. There may be a need to call from the IdP before issuing a token to the application.
+
+### Deploy applications and user identities
+
+All Azure AD B2C projects start with one or more client applications, which may have different business goals.
+
+1. [Create or configure client applications](https://docs.microsoft.com/azure/active-directory-b2c/app-registrations-training-guide). Refer to these [code samples](https://docs.microsoft.com/azure/active-directory-b2c/code-samples) for implementation.
+
+2. Next, set up your user journey based on built-in or custom user flows. [Learn when to use user flows vs. custom policies](https://docs.microsoft.com/azure/active-directory-b2c/user-flow-overview#comparing-user-flows-and-custom-policies).
+
+3. Set up IdPs based on your business need. [Learn how to add an identity provider to Azure Active Directory B2C](https://docs.microsoft.com/azure/active-directory-b2c/add-identity-provider).
+
+4. Migrate your users. [Learn about user migration approaches](https://docs.microsoft.com/azure/active-directory-b2c/user-migration). Refer to [Azure AD B2C IEF sample user journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios.
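+
+For the bulk import/export approach, migrated accounts can be created in the B2C directory through Microsoft Graph. The sketch below is a minimal example of creating one local account; the tenant name and the way the Graph token is obtained are assumptions, and the linked migration guidance remains the authoritative reference:
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+ACCESS_TOKEN = "<app-only Graph token with User.ReadWrite.All>"  # hypothetical
+
+def create_migrated_user(display_name, email, temporary_password):
+    """Create a local account in the B2C directory for a migrated user."""
+    body = {
+        "displayName": display_name,
+        "identities": [
+            {
+                "signInType": "emailAddress",
+                "issuer": "contosob2c.onmicrosoft.com",  # hypothetical B2C tenant
+                "issuerAssignedId": email,
+            }
+        ],
+        "passwordProfile": {
+            "password": temporary_password,
+            "forceChangePasswordNextSignIn": False,
+        },
+        "passwordPolicies": "DisablePasswordExpiration",
+    }
+    response = requests.post(
+        f"{GRAPH}/users",
+        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
+        json=body,
+    )
+    response.raise_for_status()
+    return response.json()
+```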
+
+Consider this sample checklist as you **deploy your applications**:
+
+- Check the number of applications that are in scope for the CIAM deployment.
+
+- Check the type of applications that are in use. For example, traditional web applications, APIs, Single page apps (SPA), or Native mobile applications.
+
+- Check the kind of authentication that is in place. For example, forms-based, federated with SAML, or federated with OIDC.
+  - If OIDC, check the response type: code or id_token (see the sketch after this list).
+
+- Check if all the frontend and backend applications are hosted on-premises, in the cloud, or in a hybrid cloud.
+
+- Check the platforms/languages used, such as [ASP.NET](https://docs.microsoft.com/azure/active-directory-b2c/quickstart-web-app-dotnet), Java, and Node.js.
+
+- Check where the current user attributes are stored. It could be Lightweight Directory Access Protocol (LDAP) or databases.
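+
+The response type called out above surfaces directly in the authorization request an application sends to the B2C user flow endpoint. A minimal sketch of building that request follows; the tenant, user flow, client ID, and redirect URI are hypothetical placeholders:
+
+```python
+from urllib.parse import urlencode
+
+# Hypothetical tenant, user flow, and app registration values.
+TENANT = "contosob2c"
+POLICY = "B2C_1_signupsignin"
+CLIENT_ID = "00000000-0000-0000-0000-000000000000"
+
+def authorize_url(response_type: str = "code") -> str:
+    """Build an authorization request; 'code' targets the authorization code
+    flow, 'id_token' the implicit flow."""
+    params = {
+        "client_id": CLIENT_ID,
+        "response_type": response_type,
+        "redirect_uri": "https://localhost:5000/auth/callback",
+        "response_mode": "query" if response_type == "code" else "form_post",
+        "scope": "openid offline_access",
+        "state": "arbitrary-state-value",
+        "nonce": "arbitrary-nonce-value",
+    }
+    return (
+        f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
+        f"{POLICY}/oauth2/v2.0/authorize?" + urlencode(params)
+    )
+
+print(authorize_url("code"))
+print(authorize_url("id_token"))
+```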
+
+Consider this sample checklist as you **deploy user identities**:
+
+- Check the number of users accessing the applications.
+
+- Check the type of IdPs that are needed. For example, Facebook, local account, and [Active Directory Federation Services (AD FS)](https://docs.microsoft.com/windows-server/identity/active-directory-federation-services).
+
+- Outline the claim schema that is required from your application, [Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/claimsschema), and your IdPs if applicable.
+
+- Outline the information that needs to be captured during a [sign-in/sign-up flow](https://docs.microsoft.com/azure/active-directory-b2c/add-sign-up-and-sign-in-policy?pivots=b2c-user-flow).
+
+### Client application onboarding and deliverables
+
+Consider this sample checklist while you **onboard an application**:
+
+| Task | Description |
+|:--|:-|
+| Define the target group of the application | Check if this application is an end customer application, business customer application, or a digital service. Check if there is a need for employee login. |
+| Identify the business value behind an application | Understand the full business case behind an application to find the best fit of Azure AD B2C solution and integration with further client applications.|
+| Check the identity groups you have | Cluster identities in different types of groups with different types of requirements, such as **Business to Customer** (B2C) for end customers and business customers, **Business to Business** (B2B) for partners and suppliers, **Business to Employee** (B2E) for your employees and external employees, **Business to Machine** (B2M) for IoT device logins and service accounts.|
+| Check the IdP you need for your business needs and processes | Azure AD B2C [supports several types of IdPs](https://docs.microsoft.com/azure/active-directory-b2c/add-identity-provider#select-an-identity-provider), and the right IdP should be chosen depending on the use case. For example, a customer-to-customer mobile application requires a fast and easy user login. A business-to-customer application with digital services has additional compliance requirements, and the user may need to log in with their business identity, such as an email login. |
+| Check the regulatory constraints | Check if there is any reason to have remote profiles or specific privacy policies. |
+| Design the sign-in and sign-up flow | Decide whether email verification inside sign-up will be needed. Decide whether [Azure AD Multi-Factor Authentication (MFA)](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) is needed, for example for the first checkout process in shop systems. Watch [this video](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). |
+| Check the type of application and authentication protocol used or that will be implemented | Exchange information about the implementation of the client application, such as a web application, SPA, or native application. Authentication protocols for the client application and Azure AD B2C could be OAuth, OIDC, and SAML. Watch [this video](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9). |
+| Plan user migration | Discuss the possibilities of [user migration with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/user-migration#:~:text=Pre%20Migration%20Flow%20in%20Azure%20AD%20B2C%20In,B2C%20directory%20with%20the%20current%20credentials.%20See%20More.). Several scenarios are possible, such as Just In Time (JIT) migration and bulk import/export. Watch [this video](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2). You can also consider using [Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=3) for user migration.|
+
+Consider this sample checklist while you **deliver**.
+
+| Capability | Description |
+|:--|:-|
+|Protocol information| Gather the base path, policies, metadata URL of both variants. Depending on the client application, specify the attributes such as sample login, client application ID, secrets, and redirects.|
+| Application samples | Refer to the provided [sample codes](https://docs.microsoft.com/azure/active-directory-b2c/code-samples). |
+|Pen testing | Before the tests, inform your operations team about the pen tests and then test all user flows including the OAuth implementation. Learn more about [Penetration testing](https://docs.microsoft.com/azure/security/fundamentals/pen-testing) and the [Microsoft Cloud unified penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
+| Unit testing | Perform unit testing and generate tokens [using Resource Owner Password Credentials (ROPC) flows](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth-ropc) (see the sketch after this table). If you hit the Azure AD B2C token limit, [contact the support team](https://docs.microsoft.com/azure/active-directory-b2c/support-options). Reuse tokens to reduce investigation efforts on your infrastructure. [Set up a ROPC flow](https://docs.microsoft.com/azure/active-directory-b2c/add-ropc-policy?tabs=app-reg-ga&pivots=b2c-user-flow).|
+| Load testing | Expect to reach Azure AD B2C [service limits](https://docs.microsoft.com/azure/active-directory-b2c/service-limits). Evaluate the expected number of authentications per month your service will have. Evaluate the expected number of average user logins per month. Assess the expected high-load traffic durations and the business reason, such as holidays, migrations, and events. Evaluate the expected peak sign-up rate, for example, the number of requests per second. Evaluate the expected peak traffic rate with MFA, for example, requests per second. Evaluate the expected geographic distribution of traffic and its peak rates. |
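+
+As a companion to the unit testing row above, the sketch below requests a test token from a B2C ROPC user flow over its token endpoint. The tenant, user flow, client ID, scope, and test credentials are hypothetical placeholders; check the linked ROPC article for the exact parameters your configuration expects:
+
+```python
+import requests
+
+# Hypothetical tenant, ROPC user flow, and app registration values.
+TENANT = "contosob2c"
+POLICY = "B2C_1_ROPC"
+CLIENT_ID = "00000000-0000-0000-0000-000000000000"
+
+TOKEN_URL = (
+    f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
+    f"{POLICY}/oauth2/v2.0/token"
+)
+
+def get_test_token(username: str, password: str) -> dict:
+    """Request tokens for a test user with the ROPC (password) grant."""
+    response = requests.post(
+        TOKEN_URL,
+        data={
+            "grant_type": "password",
+            "client_id": CLIENT_ID,
+            "username": username,
+            "password": password,
+            "scope": f"openid offline_access {CLIENT_ID}",
+            "response_type": "token id_token",
+        },
+    )
+    response.raise_for_status()
+    return response.json()  # access_token / id_token for use in test assertions
+```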
+
+### Security
+
+Consider this sample checklist to enhance the security of your application depending on your business needs:
+
+- Check if a strong authentication method such as [MFA](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) is required. It's suggested to use MFA for users who trigger high-value transactions or other risk events, for example, in banking and finance applications or the first checkout process in online shops.
+
+- If MFA is required, [check the methods available to do MFA](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods#:~:text=How%20each%20authentication%20method%20works%20%20%20,%20%20MFA%20%204%20more%20rows%20) such as SMS/Phone, email, and third-party services.
+
+- Check if any anti-bot mechanism is in use today with your applications.
+
+- Assess the risk of attempts to create fraudulent accounts and log-ins. Use [Microsoft Dynamics 365 Fraud Protection assessment](https://docs.microsoft.com/azure/active-directory-b2c/partner-dynamics-365-fraud-protection) to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
+
+- Check for any special conditional postures that need to be applied as part of sign-in or sign-up for accounts with your application.
+
+>[!NOTE]
+>You can use [Conditional Access rules](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) to adjust the difference between user experience and security based on your business goals.
+
+For more information, see [Identity Protection and Conditional Access in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/conditional-access-identity-protection-overview).
+
+### Compliance
+
+To satisfy certain regulatory requirements, you may consider using VNets, IP restrictions, Web Application Firewall (WAF), and similar services to enhance the security of your backend systems.
+
+To address basic compliance requirements, consider:
+
+- The specific regulatory compliance requirements that you need to support, for example, PCI-DSS.
+
+- Check if it's required to store data in a separate database store. If so, check if this information must never be written into the directory.
+
+### User experience
+
+Consider this sample checklist to define the user experience (UX) requirements:
+
+- Identify the required integrations to [extend CIAM capabilities and build seamless end-user experiences](https://docs.microsoft.com/azure/active-directory-b2c/partner-gallery).
+
+- Provide screenshots and user stories to show the end-user experience for the existing application. For example, provide screenshots for sign-in, sign-up, combined sign-up sign-in (SUSI), profile edit, and password reset.
+
+- Look for existing hints passed through using query string parameters in your current CIAM solution.
+
+- If you expect a high degree of UX customization, such as pixel-perfect layouts, you may need a front-end developer to help you.
+
+## Monitor an Azure AD B2C solution
+
+This phase includes the following capabilities:
+
+| Capability | Description |
+|:--|:--|
+| Monitoring |[Monitor Azure AD B2C with Azure Monitor](https://docs.microsoft.com/azure/active-directory-b2c/azure-monitor). Watch [this video](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)|
+| Auditing and Logging | [Access and review audit logs](https://docs.microsoft.com/azure/active-directory-b2c/view-audit-logs)
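+
+As a starting point for the auditing row above, the sketch below pulls recent audit events through Microsoft Graph. The token acquisition, API version, and filter expression are assumptions; the linked audit-logs article documents the exact query to use:
+
+```python
+import requests
+
+ACCESS_TOKEN = "<app-only Graph token with AuditLog.Read.All>"  # hypothetical
+
+response = requests.get(
+    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
+    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
+    params={"$filter": "loggedByService eq 'B2C'", "$top": 25},
+)
+response.raise_for_status()
+
+for event in response.json().get("value", []):
+    print(event["activityDateTime"], event["activityDisplayName"], event["result"])
+```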
+
+## Next steps
+
+- [Azure AD B2C best practices](https://docs.microsoft.com/azure/active-directory-b2c/best-practices)
+
+- [Azure AD B2C service limits](https://docs.microsoft.com/azure/active-directory-b2c/service-limits)
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
A catalog is a container of resources and access packages. You create a catalog
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
+> [!NOTE]
+> Users that have been assigned the User administrator role will no longer be able to create catalogs or manage access packages in a catalog they do not own. If users in your organization have been assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the **Identity Governance administrator** role.
+ 1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. 1. In the left menu, click **Catalogs**.
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate.md
The following table lists the tasks that the entitlement management roles can do
## Required roles to add resources to a catalog
-A Global administrator can add or remove any group (cloud-created security groups or cloud-created Microsoft 365 Groups), application, or SharePoint Online site in a catalog. A User administrator can add or remove any group or application in a catalog, except for a group configured as assignable to a directory role. Note that a user administrator can manage access packages in a catalog that includes groups configured as assignable to a directory role. For more information on role-assignable groups, reference [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md).
+A Global administrator can add or remove any group (cloud-created security groups or cloud-created Microsoft 365 Groups), application, or SharePoint Online site in a catalog. A User administrator can add or remove any group or application in a catalog, except for a group configured as assignable to a directory role. For more information on role-assignable groups, reference [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md).
+
+> [!NOTE]
+> Users that have been assigned the User administrator role will no longer be able to create catalogs or manage access packages in a catalog they do not own. If users in your organization have been assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the **Identity Governance administrator** role.
For a user who isn't a global administrator to add groups, applications, or SharePoint Online sites to a catalog, that user must have *both* an Azure AD directory role or ownership of the resource, and a catalog owner entitlement management role for the catalog. The following table lists the role combinations that are required to add resources to a catalog. To remove resources from a catalog, you must have the same roles.
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-assign-users.md
Previously updated : 09/01/2020 Last updated : 07/12/2021
To assign users to an app that you added to your Azure AD tenant, you need:
2. In the left navigation menu, select **Users and groups**. > [!NOTE] > Some of the Microsoft 365 apps require the use of PowerShell.
-3. Select the **Add user** button.
+3. Select the **Add user/groups** button.
4. On the **Add Assignment** pane, select **Users and groups**. 5. Select the user or group you want to assign to the application. You can also start typing the name of the user or group in the search box. You can choose multiple users and groups, and your selections will appear under **Selected items**. > [!IMPORTANT]
To assign users to an app that you added to your Azure AD tenant, you need:
> [!NOTE] > Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups are not currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory). 6. When finished, choose **Select**.
- ![Assign a user or group to the app](./media/assign-user-or-group-access-portal/assign-users.png)
+ :::image type="content" source="./media/assign-user-or-group-access-portal/assign-users.png" alt-text="Assign a user or group to the app":::
7. On the **Users and groups** pane, select one or more users or groups from the list and then choose the **Select** button at the bottom of the pane. 8. If the application supports it, you can assign a role to the user or group. On the **Add Assignment** pane, choose **Select Role**. Then, on the **Select Role** pane, choose a role to apply to the selected users or groups, then select **OK** at the bottom of the pane. > [!NOTE]
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-configure.md
Previously updated : 10/29/2019 Last updated : 07/12/2021
To use a custom logo:
4. Select the icon to upload the logo. 5. When you're finished, select **Save**.
- ![Screenshot of the Properties screen that shows how to change the logo.](media/add-application-portal/change-logo.png)
+ :::image type="content" source="media/add-application-portal/change-logo.png" alt-text="Screenshot of the Properties screen that shows how to change the logo.":::
> [!NOTE] > The thumbnail displayed on this **Properties** pane doesn't update right away. You can close and reopen the **Properties** pane to see the updated icon.
You can use the notes field to add any information that is relevant for the mana
2. In the **Manage** section, select **Properties** to open the **Properties** pane for editing. 3. Update the Notes field, select **Save**.
- ![Screenshot of the Properties screen that shows how to change the notes](media/add-application-portal/notes-application.png)
- ## Clean up resources If you're not going to continue with the quickstart series, then consider deleting the app to clean up your test tenant. Deleting the app is covered in the last quickstart in this series, see [Delete an app](delete-application-portal.md).
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
To set up single sign-on for an application:
> [!IMPORTANT] > If the app uses the OpenID Connect (OIDC) standard for SSO then you will not see a single sign-on option in the navigation. Refer to the quickstart on OIDC-based SSO to learn how to set it up.
- :::image type="content" source="media/add-application-portal-setup-sso/configure-sso.png" alt-text="Screenshot shows the Single sign-on config page in the Azure AD portal.":::
- 1. Select **SAML** to open the SSO configuration page. In this example, the application we're configuring for SSO is GitHub. After GitHub is set up, your users can sign in to GitHub by using their credentials from your Azure AD tenant. :::image type="content" source="media/add-application-portal-setup-sso/github-sso.png" alt-text="Screenshot shows the Single sign-on config page on GitHub.":::
To set up single sign-on for an application:
> [!TIP] > To learn more about the SAML configuration options, see [Configure SAML-based single sign-on](configure-saml-single-sign-on.md).
- :::image type="content" source="media/add-application-portal-setup-sso/github-pricing.png" alt-text="Screenshot shows the Single sign-on option in the Enterprise subscription of the GitHub pricing page.":::
- > [!TIP] > You can automate app management using the Graph API, see [Automate app management with Microsoft Graph API](/graph/application-saml-sso-configure-api).
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
Previously updated : 1/5/2021 Last updated : 07/12/2021
To delete an application from your Azure AD tenant, you need:
To delete an application from your Azure AD tenant:
-1. In the Azure AD portal, select **Enterprise applications**. Then find and select the application you want to delete. In this case, we deleted the **GitHub_test** application that we added in the previous quickstart.
+1. In the Azure AD portal, select **Enterprise applications**. Then find and select the application you want to delete. In this case, we want to delete the **360 Online** application.
1. In the **Manage** section in the left pane, select **Properties**. 1. Select **Delete**, and then select **Yes** to confirm you want to delete the app from your Azure AD tenant. + > [!TIP] > You can automate app management using the Graph API, see [Automate app management with Microsoft Graph API](/graph/application-saml-sso-configure-api).
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/view-applications-portal.md
Previously updated : 04/09/2019 Last updated : 07/09/2021
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 06/02/2021 Last updated : 07/12/2021
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## June 2021
+
+### Updated articles
+
+- [Quickstart: Add an application to your Azure Active Directory (Azure AD) tenant](add-application-portal.md)
+- [Configure group owner consent to apps accessing group data](configure-user-consent-groups.md)
+- [Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-configure.md)
+- [Manage user assignment for an app in Azure Active Directory](assign-user-or-group-access-portal.md)
+- [Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md)
+- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
+- [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
++ ## May 2021 ### Updated articles
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
This role also grants the ability to consent for delegated permissions and appli
## Application Developer
-Users in this role can create application registrations when the "Users can register applications" setting is set to No. This role also grants permission to consent on one's own behalf when the "Users can consent to apps accessing company data on their behalf" setting is set to No. Users assigned to this role are added as owners when creating new application registrations or enterprise applications.
+Users in this role can create application registrations when the "Users can register applications" setting is set to No. This role also grants permission to consent on one's own behalf when the "Users can consent to apps accessing company data on their behalf" setting is set to No. Users assigned to this role are added as owners when creating new application registrations.
> [!div class="mx-tableFixed"] > | Actions | Description |
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
AKS uses several managed identities for built-in services and add-ons.
| Identity | Name | Use case | Default permissions | Bring your own identity |-|--|-| | Control plane | not visible | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS managed public IPs, and Cluster Autoscaler operations | Contributor role for Node resource group | Supported
-| Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR) | NA (for kubernetes v1.15+) | Supported (Preview)
+| Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR) | NA (for kubernetes v1.15+) | Supported
| Add-on | AzureNPM | No identity required | NA | No | Add-on | AzureCNI network monitoring | No identity required | NA | No | Add-on | azure-policy (gatekeeper) | No identity required | NA | No
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/firewall-integration.md
description: Learn how to integrate with Azure Firewall to secure outbound traff
ms.assetid: 955a4d84-94ca-418d-aa79-b57a5eb8cb85 Previously updated : 03/25/2021 Last updated : 07/07/2021
With an Azure Firewall, you automatically get everything below configured with t
| \*.identity.azure.net:443 | | \*.ctldl.windowsupdate.com:80 | | \*.ctldl.windowsupdate.com:443 |
+| \*.prod.microsoftmetrics.com:443 |
#### Linux dependencies
Linux is not available in US Gov regions and is thus not listed as an optional c
|\*ctldl.windowsupdate.com:443 | |\*.management.usgovcloudapi.net:443 | |\*.update.microsoft.com:443 |
+|\*.prod.microsoftmetrics.com:443 |
|admin.core.usgovcloudapi.net:443 | |azperfmerges.blob.core.windows.net:443 | |azperfmerges.blob.core.windows.net:443 |
Linux is not available in US Gov regions and is thus not listed as an optional c
|gcwsprodgmdm2billing.queue.core.usgovcloudapi.net:443 | |gcwsprodgmdm2billing.table.core.usgovcloudapi.net:443 | |global.metrics.nsatc.net:443 |
+|prod.microsoftmetrics.com:443 |
|go.microsoft.com:443 | |gr-gcws-prod-bd3.usgovcloudapp.net:443 | |gr-gcws-prod-bn1.usgovcloudapp.net:443 |
app-service Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-resource-manager-templates.md
To learn about the JSON syntax and properties for App Services resources, see [M
|**Configuring an app**| **Description** | | [App certificate from Key Vault](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-certificate-from-key-vault)| Deploys an App Service app certificate from an Azure Key Vault secret and uses it for TLS/SSL binding. | | [App with a custom domain and SSL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-custom-domain-and-ssl)| Deploys an App Service app with a custom host name, and gets an app certificate from Key Vault for TLS/SSL binding. |
-| [App with a GoLang extension](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-with-golang)| Deploys an App Service app with the Golang site extension. You can then run web applications developed on Golang on Azure. |
| [App with Java 8 and Tomcat 8](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-java-tomcat)| Deploys an App Service app with Java 8 and Tomcat 8 enabled. You can then run Java applications in Azure. | | [App with regional VNet integration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/app-service-regional-vnet-integration)| Deploys an App Service app with regional VNet integration enabled. | |**Protecting an app**| **Description** |
automanage Automanage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-account.md
To grant sufficient permissions to the Automanage Account, you will need to do t
1. When prompted, enter the Object ID of the Automanage Account you created and saved down. ```azurecli-interactive
-az deployment group create --resource-group <resource group name> --template-file azuredeploy.json
+az deployment sub create --location <location> --template-file azuredeploy2.json
```+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-hotpatch.md
# Hotpatch for new virtual machines (Preview)
-Hotpatching is a new way to install updates on new Windows Server Azure Edition virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about Hotpatch for Windows Server Azure Edition VMs, which has the following benefits:
+> [!IMPORTANT]
+> Automanage for Windows Server Services is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below.
+> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+> [!NOTE]
+> Hotpatch capabilities can be found in one of these _Windows Server Azure Edition_ images: Windows Server 2019 Datacenter: Azure Edition (Core), Windows Server 2022 Datacenter: Azure Edition (Core)
+
+Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about Hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:
* Lower workload impact with less reboots * Faster deployment of updates as the packages are smaller, install faster, and have easier patch orchestration with Azure Update Manager * Better protection, as the Hotpatch update packages are scoped to Windows security updates that install faster without rebooting
Hotpatch is available in all global Azure regions in preview. Azure Government r
## How to get started > [!NOTE]
-> During the preview phase you can only get started in the Azure portal using [this link](https://aka.ms/AzureAutomanageHotPatch).
+> During the preview phase you can get started in the Azure portal using [this link](https://aka.ms/AutomanageWindowsServerPreview).
To start using Hotpatch on a new VM, follow these steps: 1. Enable preview access * One-time preview access enablement is required per subscription. * Preview access can be enabled through API, PowerShell, or CLI as described in the following section. 1. Create a VM from the Azure portal
- * During the preview, you'll need to get started using [this link](https://aka.ms/AzureAutomanageHotPatch).
+ * During the preview, you'll need to get started using [this link](https://aka.ms/AutomanageWindowsServerPreview).
1. Supply VM details
- * Ensure that _Windows Server 2019 Datacenter: Azure Edition_ is selected in the Image dropdown)
+ * Ensure that the supported _Windows Server Azure Edition_ image that you would like to use is selected in the Image dropdown. Supported images are listed at the top of this article.
* On the Management tab step, scroll down to the 'Guest OS updates' section. You'll see Hotpatching set to On and Patch installation defaulted to Azure-orchestrated patching. * Automanage VM Best Practices will be enabled by default 1. Create your new VM
az provider register --namespace Microsoft.Compute
## Patch installation
-During the preview, [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled automatically for all VMs created with _Windows Server 2019 Datacenter: Azure Edition_. With automatic VM guest patching enabled:
+During the preview, [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled automatically for all VMs created with a supported _Windows Server Azure Edition_ image. With automatic VM guest patching enabled:
* Patches classified as Critical or Security are automatically downloaded and applied on the VM. * Patches are applied during off-peak hours in the VM's time zone. * Patch orchestration is managed by Azure and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-patching).
During the preview, [Automatic VM Guest Patching](../virtual-machines/automatic-
When [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
-With Hotpatch enabled on _Windows Server 2019 Datacenter: Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months will require VM reboots. Additional Critical or Security patches may also be available periodically which may require VM reboots.
+With Hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months will require VM reboots. Additional Critical or Security patches may also be available periodically which may require VM reboots.
The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
Similar to on-demand assessment, you can also install patches on-demand for your
Hotpatch covers Windows Security updates and maintains parity with the content of security updates issued in the regular (non-Hotpatch) Windows update channel.
-There are some important considerations to running a Windows Server Azure edition VM with Hotpatch enabled. Reboots are still required to install updates that aren't included in the Hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update.
+There are some important considerations to running a supported _Windows Server Azure Edition_ VM with Hotpatch enabled. Reboots are still required to install updates that aren't included in the Hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update.
* Patches that are currently not included in the Hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and will require a reboot. ## Frequently asked questions ### What is hotpatching?
-* Hotpatching is a new way to install updates on a Windows Server 2019 Datacenter: Azure Edition VM in Azure that doesn't require a reboot after installation. It works by patching the in-memory code of running processes without the need to restart the process.
+* Hotpatching is a new way to install updates on a supported _Windows Server Azure Edition_ VM in Azure that doesn't require a reboot after installation. It works by patching the in-memory code of running processes without the need to restart the process.
### How does hotpatching work?
There are some important considerations to running a Windows Server Azure editio
### Why should I use Hotpatch?
-* When you use Hotpatch on Windows Server 2019 Datacenter: Azure Edition, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
+* When you use Hotpatch on a supported _Windows Server Azure Edition_ image, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
### What types of updates are covered by Hotpatch?
There are some important considerations to running a Windows Server Azure editio
### Can I upgrade from my existing Windows Server OS?
-* Upgrading from existing versions of Windows Server (that is, Windows Server 2016 or 2019 non-Azure editions) isn't supported currently. Upgrading to future releases of Windows Server Azure Edition will be supported.
+* Upgrading from existing versions of Windows Server (that is, Windows Server 2016 or 2019 non-Azure editions) to _Windows Server 2022 Datacenter: Azure Edition_ is supported. Upgrading to _Windows Server 2019 Datacenter: Azure Edition_ isn't supported.
### Can I use Hotpatch for production workloads during the preview?
There are some important considerations to running a Windows Server Azure editio
### Will I be charged during the preview?
-* The license for Windows Server Azure Edition is free during the preview. However, the cost of any underlying infrastructure set up for your VM (storage, compute, networking, etc.) will still be charged to your subscription.
+* The license for _Windows Server Azure Edition_ is free during the preview. However, the cost of any underlying infrastructure set up for your VM (storage, compute, networking, etc.) will still be charged to your subscription.
### How can I get troubleshooting support for Hotpatching?
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server-services-overview.md
Previously updated : 06/23/2021 Last updated : 07/09/2021 # Automanage for Windows Server Services (preview)
-Automanage for Windows Server Services brings new capabilities specifically to Windows Server Azure Edition. These capabilities include:
+Automanage for Windows Server Services brings new capabilities specifically to _Windows Server Azure Edition_. These capabilities include:
- Hotpatch - SMB over QUIC - Extended Network
Automanage for Windows Server Services brings new capabilities specifically to W
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Automanage for Windows Server capabilities can be found in one or more of these Windows Server Azure Edition images:
-
-> [!NOTE]
-> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
+Automanage for Windows Server capabilities can be found in one or more of these _Windows Server Azure Edition_ images:
- Windows Server 2019 Datacenter: Azure Edition (Core) - Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
Capabilities vary by image, see [getting started](#getting-started-with-windows-
Hotpatch is available in public preview on the following images:
-> [!NOTE]
-> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
- - Windows Server 2019 Datacenter: Azure Edition (Core) - Windows Server 2022 Datacenter: Azure Edition (Core)
Hotpatch gives you the ability to apply security updates on your VM without rebo
SMB over QUIC is available in public preview on the following images:
-> [!NOTE]
-> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
- - Windows Server 2022 Datacenter: Azure Edition (Desktop experience) - Windows Server 2022 Datacenter: Azure Edition (Core)
SMB over QUIC enables users to access files when working remotely without a VPN,
Azure Extended Network is available in public preview on the following images:
-> [!NOTE]
-> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
- - Windows Server 2022 Datacenter: Azure Edition (Desktop experience) - Windows Server 2022 Datacenter: Azure Edition (Core)
Azure Extended Network enables you to stretch an on-premises subnet into Azure t
## Getting started with Windows Server Azure Edition
-> [!NOTE]
-> Not all images and regions are available yet in Public Preview. See table below for more information about availability.
-
-It's important to consider up front, which Automanage for Windows Server capabilities you would like to use, then choose a corresponding VM image that supports all of those capabilities. Some of the Windows Server Azure Edition images support only a subset of capabilities. See the table below for a matrix of capabilities and images.
+It's important to consider up front which Automanage for Windows Server capabilities you would like to use, and then choose a corresponding VM image that supports all of those capabilities. Some of the _Windows Server Azure Edition_ images support only a subset of capabilities; see the table below for more details.
### Deciding which image to use
-|Image|Capabilities|Preview state|Regions|On date|
-|--|--|--|--|--|
-| Windows Server 2019 Datacenter: Azure Edition (Core) | Hotpatch | Public preview | (all) | March 12, 2021 |
-| Windows Server 2022 Datacenter: Azure Edition (Desktop experience) | SMB over QUIC, Extended Network | Public preview in some regions | North Europe, South Central US, West Central US | June 22, 2021 |
-| Windows Server 2022 Datacenter: Azure Edition (Core) | Hotpatch, SMB over QUIC, Extended Network | Public preview to start | (all) | July 12, 2021 |
+|Image|Capabilities|
+|--|--|
+| Windows Server 2019 Datacenter: Azure Edition (Core) | Hotpatch |
+| Windows Server 2022 Datacenter: Azure Edition (Desktop experience) | SMB over QUIC, Extended Network |
+| Windows Server 2022 Datacenter: Azure Edition (Core) | Hotpatch, SMB over QUIC, Extended Network |
### Creating a VM
-> [!NOTE]
-> _Windows Server 2022 Datacenter: Azure Edition (Core)_ is not yet available for Public Preview, and _Windows Server 2022 Datacenter: Azure Edition (Desktop experience)_ is not yet supported in all regions. For more information, see [getting started](#getting-started-with-windows-server-azure-edition).
-
-To start using Automanage for Windows Server capabilities on a new VM, use your preferred method to create an Azure VM, and select the Windows Server Azure Edition image that corresponds to the set of [capabilities](#getting-started-with-windows-server-azure-edition) that you would like to use. Configuration of those capabilities may be needed during VM creation. You can learn more about VM configuration in the individual capability topics (such as [Hotpatch](automanage-hotpatch.md)).
+To start using Automanage for Windows Server capabilities on a new VM, use your preferred method to create an Azure VM, and select the _Windows Server Azure Edition_ image that corresponds to the set of [capabilities](#getting-started-with-windows-server-azure-edition) that you would like to use. Configuration of those capabilities may be needed during VM creation. You can learn more about VM configuration in the individual capability topics (such as [Hotpatch](automanage-hotpatch.md)).
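As a hedged sketch of that flow with the Azure CLI: the resource names are placeholders, and the image URN shown is an assumption that should be checked against the current marketplace listing before use.

```azurecli
# Discover the currently published Windows Server images (assumption: this publisher name).
az vm image list --publisher MicrosoftWindowsServer --all --output table

# Create a VM from an Azure Edition image (the URN below is an assumption; substitute the one you verified).
az vm create \
  --resource-group myResourceGroup \
  --name myAzureEditionVM \
  --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest \
  --admin-username azureuser
```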
## Next steps
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows Server versions:
- Windows Server 2016 - Windows Server 2019 - Windows Server 2019 Azure Edition
+- Windows Server 2022
+- Windows Server 2022 Azure Edition
## Participating services
automation Automation Dsc Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-onboarding.md
required for your use case. Optionally, you can enter a node configuration to as
### Enable a VM using Azure Resource Manager templates
-You can install and enable a VM for State Configuration using Azure Resource Manager templates. See [Server managed by Desired State Configuration service](https://azure.microsoft.com/resources/templates/automation-configuration/) for an example template that enables an existing VM for State Configuration. If you are managing a virtual machine scale set, see the example template in [Virtual machine scale set configuration managed by Azure Automation](https://azure.microsoft.com/resources/templates/201-vmss-automation-dsc/).
+You can install and enable a VM for State Configuration using Azure Resource Manager templates. See [Server managed by Desired State Configuration service](https://azure.microsoft.com/resources/templates/automation-configuration/) for an example template that enables an existing VM for State Configuration. If you are managing a virtual machine scale set, see the example template in [Virtual machine scale set configuration managed by Azure Automation](https://azure.microsoft.com/resources/templates/vmss-automation-dsc/).
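If it helps, deploying such a template is an ordinary resource group deployment; the sketch below assumes the sample has been downloaded locally as azuredeploy.json (a placeholder file name) and that any parameters are supplied per the template's own definitions.

```azurecli
# Hedged sketch: deploy a downloaded example template to an existing resource group.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json
```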
### Enable machines using PowerShell
avere-vfxt Avere Vfxt Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/avere-vfxt/avere-vfxt-faq.md
- Title: FAQ - Avere vFXT for Azure
-description: Use these frequently asked questions to decide if Avere vFXT for Azure fits your needs. Learn how Avere vFXT for Azure works with other Azure components.
--- Previously updated : 12/19/2019----
-# Avere vFXT for Azure FAQ
-
-This article answers questions that can help you decide if Avere vFXT for Azure is right for your needs. It gives basic information about Avere vFXT and explains how it works with other Azure components and with products from outside vendors.
-
-## General
-
-### What is Avere vFXT for Azure?
-
-Avere vFXT for Azure is a high-performance file system that caches active data in Azure compute for efficient processing of critical workloads.
-
-### Is Avere vFXT a storage solution?
-
-No. Avere vFXT for Azure is a file-system *cache* that attaches to storage environments, such as your EMC or NetApp NAS, or an Azure blob container. Avere vFXT streamlines data requests from clients, and it caches the data that it serves to improve performance at scale and over time. Avere vFXT itself does not store data. It has no information about the amount of data stored behind it.
-
-### Is Avere vFXT a tiering solution?
-
-Avere vFXT for Azure does not automatically tier data between hot and cool tiers.
-
-### How do I know if an environment is right for Avere vFXT?
-
-The best way to think about this question is to ask, "Is the workload cacheable?" That is, does the workload have a high read-to-write ratio? An example is 80/20 or 70/30 reads/writes.
-
-Consider Avere vFXT for Azure if you have a file-based analytic pipeline that runs across a large number of Azure virtual machines, and it meets one or more of the following conditions:
-
-* Overall performance is slow or inconsistent because of long file access times (tens of milliseconds or seconds, depending on requirements). This latency is unacceptable to the customer.
-
-* Data required for processing is located at the far end of a WAN environment, and moving that data permanently is impractical. The data might be in a different Azure region or in a customer datacenter.
-
-* A significant number of clients are requesting the data - for example, in a high-performance computing (HPC) cluster. The large number of concurrent requests can increase latency.
-
-* The customer wants to run their current pipeline "as is" in Azure virtual machines, and needs a POSIX-based shared storage (or caching) solution for scalability. By using Avere vFXT for Azure, you don't have to rearchitect the work pipeline to make native calls to Azure Blob storage.
-
-* Your HPC application is based on NFSv3 clients. (In some circumstances, it can use SMB 2.1 clients, but performance is limited.)
-
-The following diagram can help you answer this question. The closer your workflow is to the upper right, the more likely it is that the Avere vFXT for Azure caching solution is right for your environment.
-
-![Graph diagram showing that read-heavy loads with thousands of clients are better suited for Avere vFXT](media/avere-vfxt-fit-assessment.png)
-
-### At what scale of clients does the Avere vFXT solution make the most sense?
-
-The Avere vFXT cache solution is built to handle hundreds, thousands, or tens of thousands of compute cores. If you have a few machines running light work, Avere vFXT is not the right solution.
-
-Typical Avere vFXT customers run demanding workloads starting at about 1,000 CPU cores. These environments can be as large as 50,000 cores or more. Because Avere vFXT is scalable, you can add nodes to support these workloads as they grow to require more throughput or more IOPS.
-
-### How much data can an Avere vFXT environment store?
-
-Avere vFXT for Azure is a cache. It doesn't specifically store data. It uses a combination of RAM and SSDs to store the cached data. The data is permanently stored on a back-end storage system (for example, a NetApp NAS system or a blob container). The Avere vFXT system does not have information about the amount of data stored behind it. Avere vFXT only caches the subset of that data that clients request.
-
-### What regions are supported?
-
-Avere vFXT for Azure is supported in all regions except for sovereign regions (China, Germany). Make sure that the region you want to use can support the large quantity of compute cores and the VM instances needed to create the Avere vFXT cluster.
-
-### How do I get help with Avere vFXT?
-
-A specialized group of support staff offers help with Avere vFXT for Azure. Follow the instructions in [Get help with your system](avere-vfxt-open-ticket.md#open-a-support-ticket-for-your-avere-vfxt) to open a support ticket from the Azure portal.
-
-### Is Avere vFXT highly available?
-
-Yes, Avere vFXT runs exclusively as an HA solution.
-
-### Does Avere vFXT for Azure also support other cloud services?
-
-Yes, customers can use more than one cloud provider with the Avere vFXT cluster. It supports AWS S3 standard buckets, Google Cloud Services standard buckets, and Azure blob containers.
-
-> [!NOTE]
-> A software fee applies to use Avere vFXT with AWS or Google Cloud storage. There is no additional software fee for using Azure blob storage.
-
-## Technical: Compute
-
-### Can you describe what an Avere vFXT environment "looks like"?
-
-Avere vFXT is a clustered appliance made of multiple Azure virtual machines. A Python library handles cluster creation, deletion, and modification. Read [What is Avere vFXT for Azure?](avere-vfxt-overview.md) to learn more.
-
-### What kind of Azure virtual machines does Avere vFXT run on?
-
-An Avere vFXT for Azure cluster uses Microsoft Azure E32s_v3 virtual machines.
-
-<!-- ### Can I mix and match virtual machine types for my cluster?
-
-No, you must choose one virtual machine type or the other.
-
-### Can I move between virtual machine types?
-
-Yes, there is a migration path to move from one VM type to the other. [Open a support ticket](avere-vfxt-open-ticket.md#open-a-support-ticket-for-your-avere-vfxt) to learn how.
>-
-### Does the Avere vFXT environment scale?
-
-The Avere vFXT cluster can be as small as three virtual machine nodes or as large as 24 nodes. Contact Azure technical support for help with planning if you believe you need a cluster of more than nine nodes. The larger number of nodes requires a larger deployment architecture.
-
-### Does the Avere vFXT environment "autoscale"?
-
-No. You can scale the cluster size up and down, but adding or removing cluster nodes is a manual step.
-
-### Can I run the Avere vFXT cluster as a virtual machine scale set?
-
-Avere vFXT does not support deployment of a virtual machine scale set. Several built-in availability support mechanisms are designed only for atomic VMs participating in a cluster.
-
-### Can I run the Avere vFXT cluster on low-priority VMs?
-
-No, the system requires an underlying stable set of virtual machines.
-
-### Can I run the Avere vFXT cluster in containers?
-
-No, Avere vFXT must be deployed as an independent application.
-
-### Do the Avere vFXT VMs count against my compute quota?
-
-Yes. Make sure you have a sufficient quota in the region to support the cluster.
-
-### Can I run the Avere vFXT cluster machines in different availability zones?
-
-No. The high availability model in Avere vFXT currently does not support individual Avere vFXT cluster members located in different availability zones.
-
-### Can I clone Avere vFXT virtual machines?
-
-No, you must use the supported Python script to add or remove nodes in the Avere vFXT cluster. For more information, read [Manage the Avere vFXT cluster](avere-vfxt-manage-cluster.md).
-
-### Is there a "VM" version of the software I can run in my own local environment?
-
-No, the system is offered as a clustered appliance and tested on specific virtual machine types. This restriction helps customers avoid creating a system that can't support the high-performance requirements of a typical Avere vFXT workflow.
-
-## Technical: Disks
-
-### What types of disks are supported for the Azure VMs?
-
-Avere vFXT for Azure can use 1-TB or 4-TB premium SSD configurations. The premium SSD configuration can be deployed as multiple managed disks.
-
-### Does the cluster support unmanaged disks?
-
-No, the cluster requires managed disks.
-
-### Does the system support local (attached) SSDs?
-
-Avere vFXT for Azure does not currently support local SSDs. Disks used for Avere vFXT must be able to shut down and restart, but local attached SSDs in this configuration can only be terminated.
-
-### Does the system support ultra SSDs?
-
-No, the system supports premium SSD configurations only.
-
-### Can I detach my premium SSDs and reattach them later to preserve cache contents between use?
-
-Detaching and reattaching SSDs is unsupported. Metadata or file contents on the source might have changed between uses, which might cause data integrity issues.
-
-### Does the system encrypt the cache?
-
-Data is striped across the disks but is not encrypted. However, the disks themselves can be encrypted. For more information, see [Secure and use policies on virtual machines in Azure](../virtual-machines/security-policy.md#encryption).
-
-## Technical: Networking
-
-### What network is recommended?
-
-If you're using on-premises storage with Avere vFXT, you should have a 1-Gbps or better network connection between your storage and the cluster. If you have a small amount of data and are willing to copy data to the cloud before running jobs, VPN connectivity might be sufficient.
-
-> [!TIP]
-> The slower the network link is, the slower the initial "cold" reads will be. Slow reads increase the latency of the work pipeline.
-
-### Can I run Avere vFXT in a different virtual network than my compute cluster?
-
-Yes, you can create your Avere vFXT system in a different virtual network. Read [Plan your Avere vFXT system](avere-vfxt-deploy-plan.md) for details.
-
-### Does Avere vFXT require its own subnet?
-
-Yes. Avere vFXT runs strictly as a high availability (HA) cluster and requires multiple IP addresses to operate. If the cluster is in its own subnet, you avoid the risk of IP address conflicts, which can cause problems for installation and normal operation. The cluster's subnet can be within a virtual network used by other resources, as long as no IP addresses overlap.
-
-### Can I run Avere vFXT on InfiniBand?
-
-No, Avere vFXT uses Ethernet/IP only.
-
-### How do I access my on-premises NAS environment from Avere vFXT?
-
-The Avere vFXT environment is like any other Azure VM in that it requires routed access through a network gateway or VPN to the customer datacenter (and back). Consider using Azure ExpressRoute connectivity if it's available in your environment.
-
-### What are the bandwidth requirements for Avere vFXT?
-
-The overall bandwidth requirement depends on two factors:
-
-* The amount of data being requested from the source
-* The client system's tolerance for latency during initial data loading
-
-For latency-sensitive environments, you should use a fiber solution with a minimum link speed of 1 Gbps. Use ExpressRoute if it's available.
-
-### Can I run Avere vFXT with public IP addresses?
-
-No, Avere vFXT is meant to be operated in a network environment secured through best practices.
-
-### Can I restrict internet access from my cluster's virtual network?
-
-In general, you can configure additional security on your virtual network as needed, but some restrictions can interfere with the operation of the cluster.
-
-For example, restricting outbound internet access from your virtual network causes problems for the cluster unless you also add a rule that explicitly allows access to AzureCloud. This situation is described in [supplemental documentation on GitHub](https://github.com/Azure/Avere/tree/master/src/vfxt/internet_access.md).
-
-For help with customized security, contact support as described in [Get help with your system](avere-vfxt-open-ticket.md#open-a-support-ticket-for-your-avere-vfxt).
-
-## Technical: Back-end storage (core filers)
-
-### How many core filers does a single Avere vFXT environment support?
-
-An Avere vFXT cluster supports up to 20 core filers.
-
-### How does the Avere vFXT environment store data?
-
-Avere vFXT is not storage. It's a cache that reads and writes data from multiple storage targets called core filers. Data stored on premium SSD disks in Avere vFXT is transient and is eventually flushed to the back-end core filer storage.
-
-### Which core filers does Avere vFXT support?
-
-In general terms, Avere vFXT for Azure supports the following systems as core filers:
-
-* Dell EMC Isilon (OneFS 7.1, 7.2, 8.0, and 8.1)
-* NetApp ONTAP (Clustered Mode 9.4, 9.3, 9.2, 9.1P1, 8.0-8.3) and (7-Mode 7.*, 8.0-8.3)
-
-* Azure blob containers (locally redundant storage only)
-* AWS S3 buckets
-* Google Cloud buckets
-
-### Why doesn't Avere vFXT support all NFS filers?
-
-Although all NFS platforms meet the same IETF standards, in practice each implementation has its own quirks. These details affect how Avere vFXT interacts with the storage system. The supported systems are the most widely used platforms in the marketplace.
-
-### Does Avere vFXT support private object storage (such as SwiftStack)?
-
-Avere vFXT does not support private object storage.
-
-### How can I get a specific storage product under support?
-
-Support is based on the amount of demand in the field. If there are enough revenue-based requests to support a NAS solution, we'll consider it. Make requests through Azure support.
-
-### Can I use Azure Blob storage as a core filer?
-
-Yes, Avere vFXT for Azure can use a block blob container as a cloud core filer.
-
-### What are the storage account requirements for a blob core filer?
-
-Your storage account must be a general-purpose v2 (GPv2) account and configured for locally redundant storage only. Geo-redundant storage and zone-redundant storage are not supported.
-
-Read [Azure Blob Storage cloud core filer](avere-vfxt-add-storage.md#azure-blob-storage-cloud-core-filer) for more details about the storage account requirements.
-
-### Can I use archive blob storage?
-
-No. The service-level agreement (SLA) for archive storage is not compatible with the real-time directory and file access needs of the Avere vFXT system.
-
-### Can I use cool blob storage?
-
-Cool tier blob storage is not usually recommended for an Avere vFXT for Azure core filer. Cool tier offers lower storage costs but higher operations costs. (See [Block blob pricing](<https://azure.microsoft.com/pricing/details/storage/blobs/>) for more details.) If data will be accessed and modified or deleted frequently, please consider using the Hot tier.
-
-[Access tiers](../storage/blobs/storage-blob-storage-tiers.md#cool-access-tier) gives more information about when it might make sense to use Cool tier storage as a vFXT core filer.
-
-### How do I encrypt the blob container?
-
-You can configure blob encryption either in Azure (preferred) or at the Avere vFXT core filer level.
-
-### Can I use my own encryption key for a blob core filer?
-
-By default, data is encrypted through Microsoft-managed keys for Azure Blob, Table, and Queue storage, plus Azure Files. You can bring your own key for encryption for Blob storage and Azure Files. If you choose to use Avere vFXT encryption, you must use the Avere-generated key and store it locally.
-
-## Purchasing
-
-### How do I get Avere vFXT for Azure licensing?
-
-Getting an Avere vFXT for Azure license is easy through the Azure Marketplace. Sign up for an Azure account, and then follow the instructions in [Deploy the Avere vFXT cluster](avere-vfxt-deploy.md) to create an Avere vFXT cluster.
-
-### How much does Avere vFXT cost?
-
-In Azure, there is no additional licensing fee for using Avere vFXT clusters. Customers are responsible for storage and other Azure consumption fees.
-
-### Can Avere vFXT VMs be run as low priority?
-
-No, Avere vFXT clusters require "always on" service. The clusters can be turned off when not needed.
-
-## Next steps
-
-To get started with Avere vFXT for Azure, read these articles to learn how to plan and deploy your own system:
-
-* [Plan your Avere vFXT system](avere-vfxt-deploy-plan.md)
-* [Deployment overview](avere-vfxt-deploy-overview.md)
-* [Prepare to create an Avere vFXT cluster](avere-vfxt-prereqs.md)
-* [Deploy the Avere vFXT cluster](avere-vfxt-deploy.md)
-
-To learn more about capabilities and use cases for Avere vFXT, visit [Avere vFXT for Azure](https://azure.microsoft.com/services/storage/avere-vfxt/).
avere-vfxt Avere Vfxt Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/avere-vfxt/avere-vfxt-whitepapers.md
This graphic shows components and layouts for using Avere vFXT for Azure to add
## Next steps * To continue planning an Avere vFXT for Azure deployment, read [Plan your Avere vFXT system](avere-vfxt-deploy-plan.md).
-* For answers to specific questions, consult the [Avere vFXT for Azure FAQ](avere-vfxt-faq.md).
+* For answers to specific questions, consult the [Avere vFXT for Azure FAQ](avere-vfxt-faq.yml).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
The following table describes the scenarios that are currently supported for Arc
|||| |East US|Available|Available |East US 2|Available|Available
-|West US|Available|Available
+|West US 2|Available|Available
|Central US|Not available|Available |South Central US|Available|Available |UK South|Available|Available
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
In this tutorial, you will apply configurations using GitOps on an Azure Arc ena
The [example repository](https://github.com/Azure/arc-k8s-demo) used in this article is structured around the persona of a cluster operator. The manifests in this repository provision a few namespaces, deploy workloads, and provide some team-specific configuration. Using this repository with GitOps creates the following resources on your cluster: * Namespaces: `cluster-config`, `team-a`, `team-b`
-* Deployment: `cluster-config/azure-vote`
+* Deployment: `arc-k8s-demo`
* ConfigMap: `team-a/endpoints`
-The `config-agent` polls Azure for new or updated configurations. This task will take up to 30 seconds.
+The `config-agent` polls Azure for new or updated configurations. This task will take up to 5 minutes.
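For reference, a configuration like the one used in this tutorial can be created with the `k8s-configuration` Azure CLI extension; the cluster and resource group names below are placeholders, and the command surface may differ slightly between extension versions, so treat this as a sketch.

```azurecli
# Sketch: apply the example repository to a connected cluster (placeholder names).
az k8s-configuration create \
  --name cluster-config \
  --cluster-name AzureArcTest1 \
  --resource-group AzureArcTest \
  --cluster-type connectedClusters \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-config \
  --repository-url https://github.com/Azure/arc-k8s-demo \
  --scope cluster
```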
If you are associating a private repository with the configuration, complete the steps below in [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository).
azure-arc Manage Vm Extensions Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-portal.md
# Enable Azure VM extensions from the Azure portal
-This article shows you how to deploy, update, and uninstall Azure VM extensions supported by Azure Arc enabled servers, on a Linux or Windows hybrid machine through the Azure portal.
+This article shows you how to deploy and uninstall Azure VM extensions supported by Azure Arc enabled servers, on a Linux or Windows hybrid machine through the Azure portal.
> [!NOTE] > The Key Vault VM extension (preview) does not support deployment from the Azure portal, only using the Azure CLI, the Azure PowerShell, or using an Azure Resource Manager template.
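For extensions that can't be deployed from the portal, the Azure CLI path looks roughly like the sketch below; the machine, resource group, and extension values are placeholders, and the parameter names are assumptions to verify against the installed `connectedmachine` CLI extension.

```azurecli
# Hedged sketch: deploy the Custom Script Extension to an Arc enabled server (placeholder values).
az connectedmachine extension create \
  --machine-name myHybridMachine \
  --resource-group myResourceGroup \
  --location eastus \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --type CustomScriptExtension \
  --settings '{"commandToExecute": "hostname"}'
```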
You can get a list of the VM extensions on your Arc enabled server from the Azur
:::image type="content" source="media/manage-vm-extensions/list-vm-extensions.png" alt-text="List VM extension deployed to selected machine." border="true":::
-## Update extensions
-
-When a new version of a supported extension is released, you can update the extension to that latest release. Arc enabled servers will present a banner in the Azure portal when you navigate to Arc enabled servers, informing you there are updates available for one or more extensions installed on a machine. When you view the list of installed extensions for a selected Arc enabled server, you'll notice a column labeled **Update available**. If a newer version of an extension is released, the **Update available** value for that extension shows a value of **Yes**.
-
-Updating an extension to the newest version does not affect the configuration of that extension. You are not required to respecify configuration information for any extension you update.
--
-You can update one or select multiple extensions eligible for an update from the Azure portal by performing the following steps.
-
-> [!NOTE]
-> Currently you can only update extensions from the Azure portal. Performing this operation from the Azure CLI, Azure PowerShell, or using an Azure Resource Manager template is not supported at this time.
-
-1. From your browser, go to the [Azure portal](https://portal.azure.com).
-
-2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
-
-3. Choose **Extensions**, and review the status of extensions under the **Update available** column.
-
-You can update one extension by one of three ways:
-
-* By selecting an extension from the list of installed extensions, and under the properties of the extension, select the **Update** option.
-
- :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-from-extension.png" alt-text="Update extension from selected extension." border="true":::
-
-* By selecting the extension from the list of installed extensions, and select the **Update** option from the top of the page.
-
-* By selecting one or more extensions that are eligible for an update from the list of installed extensions, and then select the **Update** option.
-
- :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-selected.png" alt-text="Update selected extension." border="true":::
- ## Uninstall extensions You can remove one or more extensions from an Arc enabled server from the Azure portal. Perform the following steps to remove an extension.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script in it, a VM extension can be used.
-Azure Arc enabled servers enables you to deploy, remove, and update Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machine through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc enabled servers:
+Azure Arc enabled servers enables you to deploy and remove Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machine through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc enabled servers:
- The [Azure portal](manage-vm-extensions-portal.md) - The [Azure CLI](manage-vm-extensions-cli.md)
Azure Arc enabled servers enables you to deploy, remove, and update Azure VM ext
> [!NOTE] > Azure Arc enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](../../virtual-machines/extensions/overview.md) article.
-> [!NOTE]
-> Currently you can only update extensions from the Azure portal. Performing this operation from the Azure CLI, Azure PowerShell, or using an Azure Resource Manager template is not supported at this time.
- ## Key benefits Azure Arc enabled servers VM extension support provides the following key benefits:
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-get-started.md
Use the following resources to get started.
::: zone pivot="programming-language-python" | Action | Resources | | | |
-| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-csharp.md?pivots=programming-language-python)<li>[Terminal/command prompt](./create-first-function-cli-csharp.md?pivots=programming-language-python) |
+| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-python.md)<li>[Terminal/command prompt](./create-first-function-cli-python.md) |
| **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) | | **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=python)<li>[Security](./security-concepts.md)<li>[Improve throughput performance](./python-scale-performance-reference.md)|
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
Azure Monitor agent uses [Data Collection Rules (DCR)](data-collection-rule-over
## Should I switch to Azure Monitor agent? Azure Monitor agent coexists with the [generally available agents for Azure Monitor](agents-overview.md), but you may consider transitioning your VMs off the current agents during the Azure Monitor agent public preview period. Consider the following factors when making this determination. -- **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environemnt support, and networking requirements will most likely be provided in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it.
+- **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will most likely be provided in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it.
- **Current and new feature requirements.** Azure Monitor agent introduces several new capabilities such as filtering, scoping, and multi-homing, but it isn't at parity yet with the current agents for other functionality such as custom log collection and integration with all solutions ([see solutions in preview](/azure/azure-monitor/faq#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent)). Most new capabilities in Azure Monitor will only be made available with Azure Monitor agent, so over time more functionality will only be available in the new agent. Consider whether Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If Azure Monitor agent has all the core capabilities you require, then consider transitioning to it. If there are critical features that you require, then continue with the current agent until Azure Monitor agent reaches parity. - **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date will be published for the Log Analytics agents in August 2021. The current agents will be supported for several years once deprecation begins.
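For context, installing the new agent on an Azure VM is an ordinary extension deployment; the sketch below uses placeholder names, and a data collection rule association is still needed afterward for data to flow.

```azurecli
# Sketch: install the Azure Monitor agent extension on a Windows VM
# (use AzureMonitorLinuxAgent for Linux; names are placeholders).
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myWindowsVM \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor
```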
Azure Monitor agent is available in all public regions that supports Log Analyti
## Supported services and features The following table shows the current support for Azure Monitor agent with other Azure services.
-| Azure service | Current support |
-|:|:|
-| [Azure Security Center](../../security-center/security-center-introduction.md) | Private preview |
-| [Azure Sentinel](../../sentinel/overview.md) | Private preview |
+| Azure service | Current support | More information |
+|:|:|:|
+| [Azure Security Center](../../security-center/security-center-introduction.md) | Private preview | [Sign up link](https://aka.ms/AMAgent) |
+| [Azure Sentinel](../../sentinel/overview.md) | Private preview | [Sign up link](https://aka.ms/AMAgent) |
The following table shows the current support for Azure Monitor agent with Azure Monitor features.
-| Azure Monitor feature | Current support |
-|:|:|
-| [VM Insights](../vm/vminsights-overview.md) | Private preview |
-| [VM Insights guest health](../vm/vminsights-health-overview.md) | Public preview |
-| [SQL insights](../insights/sql-insights-overview.md) | Public preview. |
+| Azure Monitor feature | Current support | More information |
+|:|:|:|
+| [VM Insights](../vm/vminsights-overview.md) | Private preview | [Sign up link](https://forms.office.com/r/jmyE821tTy) |
+| [VM Insights guest health](../vm/vminsights-health-overview.md) | Public preview | Available only on the new agent |
+| [SQL insights](../insights/sql-insights-overview.md) | Public preview | Available only on the new agent |
The following table shows the current support for Azure Monitor agent with Azure solutions.
-| Solution | Current support |
-|:|:|
-| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring (FIM) in Azure Security Center private preview. |
-| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (private preview) that doesn't require an agent. |
+| Solution | Current support | More information |
+|:|:|:|
+| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring (FIM) in Azure Security Center private preview. | [Sign up link](https://aka.ms/AMAgent) |
+| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (private preview) that doesn't require an agent. | [Sign up link](https://www.yammer.com/azureadvisors/threads/1064001355087872) |
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-core.md
If the SDK is installed at build time as shown in this article, you don't need t
* You can track additional custom telemetry by using the `TrackXXX()` API. * You have full control over the configuration.
-### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formally Status Monitor v2)?
+### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)?
No, [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports only ASP.NET 4.x.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 05/26/2021 Last updated : 07/06/2021
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|BackendDuration|Yes|Duration of Backend Requests|Milliseconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
-|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service|Location|
-|Duration|Yes|Overall Duration of Gateway Requests|Milliseconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
+|BackendDuration|Yes|Duration of Backend Requests|MilliSeconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
+|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service. Note: For skus other than Premium, 'Max' aggregation will show the value as 0.|Location|
+|Duration|Yes|Overall Duration of Gateway Requests|MilliSeconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
|EventHubDroppedEvents|Yes|Dropped EventHub Events|Count|Total|Number of events skipped because of queue size limit reached|Location| |EventHubRejectedEvents|Yes|Rejected EventHub Events|Count|Total|Number of rejected EventHub events (wrong configuration or unauthorized)|Location| |EventHubSuccessfulEvents|Yes|Successful EventHub Events|Count|Total|Number of successful EventHub events|Location|
For important additional information, see [Monitoring Agents Overview](../agents
|HttpIncomingRequestDuration|Yes|HttpIncomingRequestDuration|Count|Average|Latency on an http request.|StatusCode, Authentication| |ThrottledHttpRequestCount|Yes|ThrottledHttpRequestCount|Count|Count|Throttled http requests.|No Dimensions| + ## Microsoft.AppPlatform/Spring |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod| |working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod| - ## Microsoft.Automation/automationAccounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|TotalUpdateDeploymentMachineRuns|Yes|Total Update Deployment Machine Runs|Count|Total|Total software update deployment machine runs in a software update deployment run|SoftwareUpdateConfigurationName, Status, TargetComputer, SoftwareUpdateConfigurationRunId| |TotalUpdateDeploymentRuns|Yes|Total Update Deployment Runs|Count|Total|Total software update deployment runs|SoftwareUpdateConfigurationName, Status| - ## Microsoft.AVS/privateClouds |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|UsageAverage|Yes|Average Memory Usage|Percent|Average|Memory usage as percentage of total configured or available memory|clustername| |UsedLatest|Yes|Datastore Disk Used|Bytes|Average|The total amount of disk used in the datastore|dsname| - ## Microsoft.Batch/batchAccounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
+## Microsoft.Cloudtest/hostedpools
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Provisioned|Yes|Provisioned|Count|Count|Resources that are provisioned|PoolId, SKU, Images, ProviderName|
+|Ready|Yes|Ready|Percent|Average|Resources that are ready to be used|PoolId, SKU, Images, ProviderName|
+|TotalDurationMs|Yes|TotalDurationMs|Milliseconds|Average|Average time to complete requests (ms)|PoolId, Type, ResourceRequestType, Image|
++
+## Microsoft.Cloudtest/pools
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|Provisioned|Yes|Provisioned|Count|Count|Resources that are provisioned|PoolId, SKU, Images, ProviderName|
+|Ready|Yes|Ready|Percent|Average|Resources that are ready to be used|PoolId, SKU, Images, ProviderName|
+|TotalDurationMs|Yes|TotalDurationMs|Milliseconds|Average|Average time to complete requests (ms)|PoolId, Type, ResourceRequestType, Image|
++
+## Microsoft.ClusterStor/nodes
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|TotalCapacityAvailable|No|TotalCapacityAvailable|Bytes|Average|The total capacity available in lustre file system|filesystem_name, category, system|
+|TotalCapacityUsed|No|TotalCapacityUsed|Bytes|Average|The total capacity used in lustre file system|filesystem_name, category, system|
+|TotalRead|No|TotalRead|BytesPerSecond|Average|The total lustre file system read per second|filesystem_name, category, system|
+|TotalWrite|No|TotalWrite|BytesPerSecond|Average|The total lustre file system write per second|filesystem_name, category, system|
++ ## Microsoft.CognitiveServices/accounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|Available Memory Bytes|Yes|Available Memory Bytes (Preview)|Bytes|Average|Amount of physical memory, in bytes, immediately available for allocation to a process or for system use in the Virtual Machine|RoleInstanceId, RoleId|
|Disk Read Bytes|Yes|Disk Read Bytes|Bytes|Total|Bytes read from disk during monitoring period|RoleInstanceId, RoleId| |Disk Read Operations/Sec|Yes|Disk Read Operations/Sec|CountPerSecond|Average|Disk Read IOPS|RoleInstanceId, RoleId| |Disk Write Bytes|Yes|Disk Write Bytes|Bytes|Total|Bytes written to disk during monitoring period|RoleInstanceId, RoleId| |Disk Write Operations/Sec|Yes|Disk Write Operations/Sec|CountPerSecond|Average|Disk Write IOPS|RoleInstanceId, RoleId|
+|Network In Total|Yes|Network In Total|Bytes|Total|The number of bytes received on all network interfaces by the Virtual Machine(s) (Incoming Traffic)|RoleInstanceId, RoleId|
+|Network Out Total|Yes|Network Out Total|Bytes|Total|The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic)|RoleInstanceId, RoleId|
|Percentage CPU|Yes|Percentage CPU|Percent|Average|The percentage of allocated compute units that are currently in use by the Virtual Machine(s)|RoleInstanceId, RoleId|
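Any of the metrics in these tables can be retrieved programmatically; for example, a hedged Azure CLI sketch for the "Percentage CPU" metric listed above, where the resource ID is a placeholder for the full ARM ID of the emitting resource.

```azurecli
# Sketch: pull five-minute average CPU for a resource (placeholder resource ID).
az monitor metrics list \
  --resource "<resource-id>" \
  --metric "Percentage CPU" \
  --aggregation Average \
  --interval PT5M \
  --output table
```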
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|Available Memory Bytes|Yes|Available Memory Bytes (Preview)|Bytes|Average|Amount of physical memory, in bytes, immediately available for allocation to a process or for system use in the Virtual Machine|RoleInstanceId, RoleId|
|Disk Read Bytes|Yes|Disk Read Bytes|Bytes|Total|Bytes read from disk during monitoring period|RoleInstanceId, RoleId| |Disk Read Operations/Sec|Yes|Disk Read Operations/Sec|CountPerSecond|Average|Disk Read IOPS|RoleInstanceId, RoleId| |Disk Write Bytes|Yes|Disk Write Bytes|Bytes|Total|Bytes written to disk during monitoring period|RoleInstanceId, RoleId| |Disk Write Operations/Sec|Yes|Disk Write Operations/Sec|CountPerSecond|Average|Disk Write IOPS|RoleInstanceId, RoleId|
+|Network In Total|Yes|Network In Total|Bytes|Total|The number of bytes received on all network interfaces by the Virtual Machine(s) (Incoming Traffic)|RoleInstanceId, RoleId|
+|Network Out Total|Yes|Network Out Total|Bytes|Total|The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic)|RoleInstanceId, RoleId|
|Percentage CPU|Yes|Percentage CPU|Percent|Average|The percentage of allocated compute units that are currently in use by the Virtual Machine(s)|RoleInstanceId, RoleId|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|Available Memory Bytes|Yes|Available Memory Bytes (Preview)|Bytes|Average|Amount of physical memory, in bytes, immediately available for allocation to a process or for system use in the Virtual Machine|No Dimensions|
|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs|No Dimensions| |CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on B-series burstable VMs|No Dimensions| |Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|Available Memory Bytes|Yes|Available Memory Bytes (Preview)|Bytes|Average|Amount of physical memory, in bytes, immediately available for allocation to a process or for system use in the Virtual Machine|VMName|
|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs|No Dimensions| |CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on B-series burstable VMs|No Dimensions| |Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN, VMName|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|Available Memory Bytes|Yes|Available Memory Bytes (Preview)|Bytes|Average|Amount of physical memory, in bytes, immediately available for allocation to a process or for system use in the Virtual Machine|No Dimensions|
|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs|No Dimensions| |CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on B-series burstable VMs|No Dimensions| |Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN|
For important additional information, see [Monitoring Agents Overview](../agents
|NICWriteThroughput|Yes|Write Throughput (Network)|BytesPerSecond|Average|The write throughput of the network interface on the device in the reporting period for all volumes in the gateway.|InstanceName| |TotalCapacity|Yes|Total Capacity|Bytes|Average|Total Capacity|No Dimensions| - ## Microsoft.DataCollaboration/workspaces |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|ProposalCount|Yes|Created Proposals|Count|Maximum|Number of created proposals|ProposalName| |ScriptCount|Yes|Created Scripts|Count|Maximum|Number of created scripts|ScriptName| - ## Microsoft.DataFactory/datafactories |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|TwinCount|Yes|Twin Count|Count|Total|Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you are approaching the service limit for max number of twins allowed per instance.|No Dimensions|
-## Microsoft.DocumentDB/databaseAccounts
+## Microsoft.DocumentDB/DatabaseAccounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
-|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|Region, ClosureReason|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason|
|CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions|
|CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
-|CassandraKeyspaceCreate|No|Cassandra Keyspace Created|Count|Count|Cassandra Keyspace Created|ResourceName, |
-|CassandraKeyspaceDelete|No|Cassandra Keyspace Deleted|Count|Count|Cassandra Keyspace Deleted|ResourceName, |
-|CassandraKeyspaceThroughputUpdate|No|Cassandra Keyspace Throughput Updated|Count|Count|Cassandra Keyspace Throughput Updated|ResourceName, |
-|CassandraKeyspaceUpdate|No|Cassandra Keyspace Updated|Count|Count|Cassandra Keyspace Updated|ResourceName, |
-|CassandraRequestCharges|No|Cassandra Request Charges|Count|Total|RUs consumed for Cassandra requests made|DatabaseName, CollectionName, Region, OperationType, ResourceType|
-|CassandraRequests|No|Cassandra Requests|Count|Count|Number of Cassandra requests made|DatabaseName, CollectionName, Region, OperationType, ResourceType, ErrorCode|
-|CassandraTableCreate|No|Cassandra Table Created|Count|Count|Cassandra Table Created|ResourceName, ChildResourceName, |
-|CassandraTableDelete|No|Cassandra Table Deleted|Count|Count|Cassandra Table Deleted|ResourceName, ChildResourceName, |
-|CassandraTableThroughputUpdate|No|Cassandra Table Throughput Updated|Count|Count|Cassandra Table Throughput Updated|ResourceName, ChildResourceName, |
-|CassandraTableUpdate|No|Cassandra Table Updated|Count|Count|Cassandra Table Updated|ResourceName, ChildResourceName, |
+|CassandraKeyspaceCreate|No|Cassandra Keyspace Created|Count|Count|Cassandra Keyspace Created|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|CassandraKeyspaceDelete|No|Cassandra Keyspace Deleted|Count|Count|Cassandra Keyspace Deleted|ResourceName, ApiKind, ApiKindResourceType, OperationType|
+|CassandraKeyspaceThroughputUpdate|No|Cassandra Keyspace Throughput Updated|Count|Count|Cassandra Keyspace Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|CassandraKeyspaceUpdate|No|Cassandra Keyspace Updated|Count|Count|Cassandra Keyspace Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|CassandraRequestCharges|No|Cassandra Request Charges|Count|Total|RUs consumed for Cassandra requests made|APIType, DatabaseName, CollectionName, Region, OperationType, ResourceType|
+|CassandraRequests|No|Cassandra Requests|Count|Count|Number of Cassandra requests made|APIType, DatabaseName, CollectionName, Region, OperationType, ResourceType, ErrorCode|
+|CassandraTableCreate|No|Cassandra Table Created|Count|Count|Cassandra Table Created|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|CassandraTableDelete|No|Cassandra Table Deleted|Count|Count|Cassandra Table Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType|
+|CassandraTableThroughputUpdate|No|Cassandra Table Throughput Updated|Count|Count|Cassandra Table Throughput Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|CassandraTableUpdate|No|Cassandra Table Updated|Count|Count|Cassandra Table Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
|CreateAccount|Yes|Account Created|Count|Count|Account Created|No Dimensions|
|DataUsage|No|Data Usage|Bytes|Total|Total data usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
-|DedicatedGatewayAverageCPUUsage|No|DedicatedGatewayAverageCPUUsage|Percent|Average|Average CPU usage across dedicated gateway instances|Region, |
+|DedicatedGatewayAverageCPUUsage|No|DedicatedGatewayAverageCPUUsage|Percent|Average|Average CPU usage across dedicated gateway instances|Region, MetricType|
|DedicatedGatewayAverageMemoryUsage|No|DedicatedGatewayAverageMemoryUsage|Bytes|Average|Average memory usage across dedicated gateway instances, which is used for both routing requests and caching data|Region|
-|DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, |
+|DedicatedGatewayMaximumCPUUsage|No|DedicatedGatewayMaximumCPUUsage|Percent|Average|Average Maximum CPU usage across dedicated gateway instances|Region, MetricType|
|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region|
|DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions|
|DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
|DocumentQuota|No|Document Quota|Bytes|Total|Total storage quota reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
-|GremlinDatabaseCreate|No|Gremlin Database Created|Count|Count|Gremlin Database Created|ResourceName, |
-|GremlinDatabaseDelete|No|Gremlin Database Deleted|Count|Count|Gremlin Database Deleted|ResourceName, |
-|GremlinDatabaseThroughputUpdate|No|Gremlin Database Throughput Updated|Count|Count|Gremlin Database Throughput Updated|ResourceName, |
-|GremlinDatabaseUpdate|No|Gremlin Database Updated|Count|Count|Gremlin Database Updated|ResourceName, |
-|GremlinGraphCreate|No|Gremlin Graph Created|Count|Count|Gremlin Graph Created|ResourceName, ChildResourceName, |
-|GremlinGraphDelete|No|Gremlin Graph Deleted|Count|Count|Gremlin Graph Deleted|ResourceName, ChildResourceName, |
-|GremlinGraphThroughputUpdate|No|Gremlin Graph Throughput Updated|Count|Count|Gremlin Graph Throughput Updated|ResourceName, ChildResourceName, |
-|GremlinGraphUpdate|No|Gremlin Graph Updated|Count|Count|Gremlin Graph Updated|ResourceName, ChildResourceName, |
+|GremlinDatabaseCreate|No|Gremlin Database Created|Count|Count|Gremlin Database Created|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|GremlinDatabaseDelete|No|Gremlin Database Deleted|Count|Count|Gremlin Database Deleted|ResourceName, ApiKind, ApiKindResourceType, OperationType|
+|GremlinDatabaseThroughputUpdate|No|Gremlin Database Throughput Updated|Count|Count|Gremlin Database Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|GremlinDatabaseUpdate|No|Gremlin Database Updated|Count|Count|Gremlin Database Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|GremlinGraphCreate|No|Gremlin Graph Created|Count|Count|Gremlin Graph Created|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|GremlinGraphDelete|No|Gremlin Graph Deleted|Count|Count|Gremlin Graph Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType|
+|GremlinGraphThroughputUpdate|No|Gremlin Graph Throughput Updated|Count|Count|Gremlin Graph Throughput Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|GremlinGraphUpdate|No|Gremlin Graph Updated|Count|Count|Gremlin Graph Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
|IndexUsage|No|Index Usage|Bytes|Total|Total index usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
|IntegratedCacheEvictedEntriesSize|No|IntegratedCacheEvictedEntriesSize|Bytes|Average|Size of the entries evicted from the integrated cache|Region|
-|IntegratedCacheItemExpirationCount|No|IntegratedCacheItemExpirationCount|Count|Average|Number of items evicted from the integrated cache due to TTL expiration|Region, |
-|IntegratedCacheItemHitRate|No|IntegratedCacheItemHitRate|Percent|Average|Number of point reads that used the integrated cache divided by number of point reads routed through the dedicated gateway with eventual consistency|Region, |
-|IntegratedCacheQueryExpirationCount|No|IntegratedCacheQueryExpirationCount|Count|Average|Number of queries evicted from the integrated cache due to TTL expiration|Region, |
-|IntegratedCacheQueryHitRate|No|IntegratedCacheQueryHitRate|Percent|Average|Number of queries that used the integrated cache divided by number of queries routed through the dedicated gateway with eventual consistency|Region, |
-|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Cosmos DB maintains system metadata collection for each account, that allows you to enumerate collections, databases, etc, and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
-|MongoCollectionCreate|No|Mongo Collection Created|Count|Count|Mongo Collection Created|ResourceName, ChildResourceName, |
-|MongoCollectionDelete|No|Mongo Collection Deleted|Count|Count|Mongo Collection Deleted|ResourceName, ChildResourceName, |
-|MongoCollectionThroughputUpdate|No|Mongo Collection Throughput Updated|Count|Count|Mongo Collection Throughput Updated|ResourceName, ChildResourceName, |
-|MongoCollectionUpdate|No|Mongo Collection Updated|Count|Count|Mongo Collection Updated|ResourceName, ChildResourceName, |
-|MongoDatabaseDelete|No|Mongo Database Deleted|Count|Count|Mongo Database Deleted|ResourceName, |
-|MongoDatabaseThroughputUpdate|No|Mongo Database Throughput Updated|Count|Count|Mongo Database Throughput Updated|ResourceName, |
-|MongoDBDatabaseCreate|No|Mongo Database Created|Count|Count|Mongo Database Created|ResourceName, |
-|MongoDBDatabaseUpdate|No|Mongo Database Updated|Count|Count|Mongo Database Updated|ResourceName, |
+|IntegratedCacheItemExpirationCount|No|IntegratedCacheItemExpirationCount|Count|Average|Number of items evicted from the integrated cache due to TTL expiration|Region, CacheEntryType|
+|IntegratedCacheItemHitRate|No|IntegratedCacheItemHitRate|Percent|Average|Number of point reads that used the integrated cache divided by number of point reads routed through the dedicated gateway with eventual consistency|Region, CacheEntryType|
+|IntegratedCacheQueryExpirationCount|No|IntegratedCacheQueryExpirationCount|Count|Average|Number of queries evicted from the integrated cache due to TTL expiration|Region, CacheEntryType|
+|IntegratedCacheQueryHitRate|No|IntegratedCacheQueryHitRate|Percent|Average|Number of queries that used the integrated cache divided by number of queries routed through the dedicated gateway with eventual consistency|Region, CacheEntryType|
+|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Cosmos DB maintains a system metadata collection for each account that allows you to enumerate collections, databases, etc., and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, Role|
+|MongoCollectionCreate|No|Mongo Collection Created|Count|Count|Mongo Collection Created|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|MongoCollectionDelete|No|Mongo Collection Deleted|Count|Count|Mongo Collection Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType|
+|MongoCollectionThroughputUpdate|No|Mongo Collection Throughput Updated|Count|Count|Mongo Collection Throughput Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|MongoCollectionUpdate|No|Mongo Collection Updated|Count|Count|Mongo Collection Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|MongoDatabaseDelete|No|Mongo Database Deleted|Count|Count|Mongo Database Deleted|ResourceName, ApiKind, ApiKindResourceType, OperationType|
+|MongoDatabaseThroughputUpdate|No|Mongo Database Throughput Updated|Count|Count|Mongo Database Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|MongoDBDatabaseCreate|No|Mongo Database Created|Count|Count|Mongo Database Created|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|MongoDBDatabaseUpdate|No|Mongo Database Updated|Count|Count|Mongo Database Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
|MongoRequestCharge|Yes|Mongo Request Charge|Count|Total|Mongo Request Units Consumed|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
|MongoRequests|Yes|Mongo Requests|Count|Count|Number of Mongo Requests Made|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
-|MongoRequestsCount|No|(deprecated) Mongo Request Rate|CountPerSecond|Average|Mongo request Count per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsDelete|No|(deprecated) Mongo Delete Request Rate|CountPerSecond|Average|Mongo Delete request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsInsert|No|(deprecated) Mongo Insert Request Rate|CountPerSecond|Average|Mongo Insert count per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsQuery|No|(deprecated) Mongo Query Request Rate|CountPerSecond|Average|Mongo Query request per second|DatabaseName, CollectionName, Region, ErrorCode|
-|MongoRequestsUpdate|No|(deprecated) Mongo Update Request Rate|CountPerSecond|Average|Mongo Update request per second|DatabaseName, CollectionName, Region, ErrorCode|
|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName|
|RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions|
|ReplicationLatency|Yes|P99 Replication Latency|MilliSeconds|Average|P99 Replication Latency across source and target regions for geo-enabled account|SourceRegion, TargetRegion|
|ServerSideLatency|No|Server Side Latency|MilliSeconds|Average|Server Side Latency|DatabaseName, CollectionName, Region, ConnectionMode, OperationType, PublicAPIType|
|ServiceAvailability|No|Service Availability|Percent|Average|Account requests availability at one hour, day or month granularity|No Dimensions|
-|SqlContainerCreate|No|Sql Container Created|Count|Count|Sql Container Created|ResourceName, ChildResourceName, |
-|SqlContainerDelete|No|Sql Container Deleted|Count|Count|Sql Container Deleted|ResourceName, ChildResourceName, |
-|SqlContainerThroughputUpdate|No|Sql Container Throughput Updated|Count|Count|Sql Container Throughput Updated|ResourceName, ChildResourceName, |
-|SqlContainerUpdate|No|Sql Container Updated|Count|Count|Sql Container Updated|ResourceName, ChildResourceName, |
-|SqlDatabaseCreate|No|Sql Database Created|Count|Count|Sql Database Created|ResourceName, |
-|SqlDatabaseDelete|No|Sql Database Deleted|Count|Count|Sql Database Deleted|ResourceName, |
-|SqlDatabaseThroughputUpdate|No|Sql Database Throughput Updated|Count|Count|Sql Database Throughput Updated|ResourceName, |
-|SqlDatabaseUpdate|No|Sql Database Updated|Count|Count|Sql Database Updated|ResourceName, |
-|TableTableCreate|No|AzureTable Table Created|Count|Count|AzureTable Table Created|ResourceName, |
-|TableTableDelete|No|AzureTable Table Deleted|Count|Count|AzureTable Table Deleted|ResourceName, |
-|TableTableThroughputUpdate|No|AzureTable Table Throughput Updated|Count|Count|AzureTable Table Throughput Updated|ResourceName, |
-|TableTableUpdate|No|AzureTable Table Updated|Count|Count|AzureTable Table Updated|ResourceName, |
+|SqlContainerCreate|No|Sql Container Created|Count|Count|Sql Container Created|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|SqlContainerDelete|No|Sql Container Deleted|Count|Count|Sql Container Deleted|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, OperationType|
+|SqlContainerThroughputUpdate|No|Sql Container Throughput Updated|Count|Count|Sql Container Throughput Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|SqlContainerUpdate|No|Sql Container Updated|Count|Count|Sql Container Updated|ResourceName, ChildResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|SqlDatabaseCreate|No|Sql Database Created|Count|Count|Sql Database Created|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|SqlDatabaseDelete|No|Sql Database Deleted|Count|Count|Sql Database Deleted|ResourceName, ApiKind, ApiKindResourceType, OperationType|
+|SqlDatabaseThroughputUpdate|No|Sql Database Throughput Updated|Count|Count|Sql Database Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|SqlDatabaseUpdate|No|Sql Database Updated|Count|Count|Sql Database Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|TableTableCreate|No|AzureTable Table Created|Count|Count|AzureTable Table Created|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
+|TableTableDelete|No|AzureTable Table Deleted|Count|Count|AzureTable Table Deleted|ResourceName, ApiKind, ApiKindResourceType, OperationType|
+|TableTableThroughputUpdate|No|AzureTable Table Throughput Updated|Count|Count|AzureTable Table Throughput Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest|
+|TableTableUpdate|No|AzureTable Table Updated|Count|Count|AzureTable Table Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
|TotalRequests|Yes|Total Requests|Count|Count|Number of requests made|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status|
|TotalRequestUnits|Yes|Total Request Units|Count|Total|Request Units consumed|DatabaseName, CollectionName, Region, StatusCode, OperationType, Status|
|UpdateAccountKeys|Yes|Account Keys Updated|Count|Count|Account Keys Updated|KeyType|
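
With the expanded dimension sets above (for example, StatusCode and OperationType on TotalRequests), request traffic for a Cosmos DB account can be sliced when the metric is queried programmatically. The snippet below is a minimal sketch that assumes the azure-monitor-query and azure-identity Python packages and a placeholder resource ID; exact method and attribute names can vary between package versions.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for a Cosmos DB account (Microsoft.DocumentDB/databaseAccounts).
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query TotalRequests for the last hour, filtered to throttled (429) responses,
# using the Count aggregation listed in the table above.
response = client.query_resource(
    resource_id,
    metric_names=["TotalRequests"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.COUNT],
    filter="StatusCode eq '429'",
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.count)
```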
|KafkaRestProxy.ProducerRequestTime.p95|Yes|REST proxy Producer RequestLatency|Milliseconds|Average|Message latency in a producer request through Kafka REST proxy|Machine, Topic|
|KafkaRestProxy.ProducerRequestWaitingInQueueTime.p95|Yes|REST proxy Producer Request Backlog|Milliseconds|Average|Producer REST proxy queue length|Machine, Topic|
|NumActiveWorkers|Yes|Number of Active Workers|Count|Maximum|Number of Active Workers|MetricName|
+|PendingCPU|Yes|Pending CPU|Count|Maximum|Pending CPU Requests in YARN|No Dimensions|
+|PendingMemory|Yes|Pending Memory|Count|Maximum|Pending Memory Requests in YARN|No Dimensions|
## Microsoft.HealthcareApis/services
|TotalNumberOfThrottledQueries|Yes|Total number of throttled queries|Count|Maximum|Total number of throttled queries|No Dimensions|
-## Microsoft.Logic/integrationServiceEnvironments
+## Microsoft.Logic/IntegrationServiceEnvironments
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ActionsStarted|Yes|Actions Started |Count|Total|Number of workflow actions started.|No Dimensions|
|ActionsSucceeded|Yes|Actions Succeeded |Count|Total|Number of workflow actions succeeded.|No Dimensions|
|ActionSuccessLatency|Yes|Action Success Latency |Seconds|Average|Latency of succeeded workflow actions.|No Dimensions|
-|ActionThrottledEvents|Yes|Action Throttled Events|Count|Total|Number of workflow action throttled events..|No Dimensions|
|IntegrationServiceEnvironmentConnectorMemoryUsage|Yes|Connector Memory Usage for Integration Service Environment|Percent|Average|Connector memory usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentConnectorProcessorUsage|Yes|Connector Processor Usage for Integration Service Environment|Percent|Average|Connector processor usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentWorkflowMemoryUsage|Yes|Workflow Memory Usage for Integration Service Environment|Percent|Average|Workflow memory usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentWorkflowProcessorUsage|Yes|Workflow Processor Usage for Integration Service Environment|Percent|Average|Workflow processor usage for integration service environment.|No Dimensions|
-|RunFailurePercentage|Yes|Run Failure Percentage|Percent|Total|Percentage of workflow runs failed.|No Dimensions|
|RunLatency|Yes|Run Latency|Seconds|Average|Latency of completed workflow runs.|No Dimensions|
|RunsCancelled|Yes|Runs Cancelled|Count|Total|Number of workflow runs cancelled.|No Dimensions|
|RunsCompleted|Yes|Runs Completed|Count|Total|Number of workflow runs completed.|No Dimensions|
|RunsFailed|Yes|Runs Failed|Count|Total|Number of workflow runs failed.|No Dimensions|
|RunsStarted|Yes|Runs Started|Count|Total|Number of workflow runs started.|No Dimensions|
|RunsSucceeded|Yes|Runs Succeeded|Count|Total|Number of workflow runs succeeded.|No Dimensions|
-|RunStartThrottledEvents|Yes|Run Start Throttled Events|Count|Total|Number of workflow run start throttled events.|No Dimensions|
|RunSuccessLatency|Yes|Run Success Latency|Seconds|Average|Latency of succeeded workflow runs.|No Dimensions|
-|RunThrottledEvents|Yes|Run Throttled Events|Count|Total|Number of workflow action or trigger throttled events.|No Dimensions|
|TriggerFireLatency|Yes|Trigger Fire Latency |Seconds|Average|Latency of fired workflow triggers.|No Dimensions|
|TriggerLatency|Yes|Trigger Latency |Seconds|Average|Latency of completed workflow triggers.|No Dimensions|
|TriggersCompleted|Yes|Triggers Completed |Count|Total|Number of workflow triggers completed.|No Dimensions|
|TriggersStarted|Yes|Triggers Started |Count|Total|Number of workflow triggers started.|No Dimensions|
|TriggersSucceeded|Yes|Triggers Succeeded |Count|Total|Number of workflow triggers succeeded.|No Dimensions|
|TriggerSuccessLatency|Yes|Trigger Success Latency |Seconds|Average|Latency of succeeded workflow triggers.|No Dimensions|
-|TriggerThrottledEvents|Yes|Trigger Throttled Events|Count|Total|Number of workflow trigger throttled events.|No Dimensions|
-## Microsoft.Logic/workflows
+## Microsoft.Logic/Workflows
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|Failed Runs|Yes|Failed Runs|Count|Total|Number of runs failed for this workspace. Count is updated when a run fails.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName|
|Finalizing Runs|Yes|Finalizing Runs|Count|Total|Number of runs entered finalizing state for this workspace. Count is updated when a run has completed but output collection still in progress.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName|
|GpuCapacityMilliGPUs|Yes|GpuCapacityMilliGPUs|Count|Average|Maximum capacity of a GPU device in milli-GPUs. Capacity is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
-|GpuEnergyJoules|Yes|GpuEnergyJoules|Count|Total|Interval energy in Joules on a GPU node. Energy is reported at one minute intervals.|Scenario, runId, rootRunId, NodeId, DeviceId, ClusterName|
+|GpuEnergyJoules|Yes|GpuEnergyJoules|Count|Total|Interval energy in Joules on a GPU node. Energy is reported at one minute intervals.|Scenario, runId, rootRunId, InstanceId, DeviceId, ComputeName|
|GpuMemoryCapacityMegabytes|Yes|GpuMemoryCapacityMegabytes|Count|Average|Maximum memory capacity of a GPU device in megabytes. Capacity is aggregated at one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
|GpuMemoryUtilization|Yes|GpuMemoryUtilization|Count|Average|Percentage of memory utilization on a GPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, DeviceId, ClusterName|
|GpuMemoryUtilizationMegabytes|Yes|GpuMemoryUtilizationMegabytes|Count|Average|Memory utilization of a GPU device in megabytes. Utilization is aggregated at one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
|ContentKeyPolicyCount|Yes|Content Key Policy count|Count|Average|How many content key policies are already created in current media service account|No Dimensions|
|ContentKeyPolicyQuota|Yes|Content Key Policy quota|Count|Average|How many content key policies are allowed for current media service account|No Dimensions|
|ContentKeyPolicyQuotaUsedPercentage|Yes|Content Key Policy quota used percentage|Percent|Average|Content Key Policy used percentage in current media service account|No Dimensions|
+|JobsScheduled|Yes|Jobs Scheduled|Count|Average|The number of Jobs in the Scheduled state. Counts on this metric only reflect jobs submitted through the v3 API. Jobs submitted through the v2 (Legacy) API are not counted.|No Dimensions|
|MaxChannelsAndLiveEventsCount|Yes|Max live event quota|Count|Average|The maximum number of live events allowed in the current media services account|No Dimensions|
|MaxRunningChannelsAndLiveEventsCount|Yes|Max running live event quota|Count|Average|The maximum number of running live events allowed in the current media services account|No Dimensions|
|RunningChannelsAndLiveEventsCount|Yes|Running live event count|Count|Average|The total number of running live events in the current media services account|No Dimensions|
|pingmesh|No|Bastion Communication Status|Count|Average|Communication status shows 1 if all communication is good and 0 if it's bad.|No Dimensions|
|sessions|No|Session Count|Count|Total|Sessions Count for the Bastion. View in sum and per instance.|host|
|total|Yes|Total Memory|Count|Average|Total memory stats.|host|
-|usage_user|No|Used CPU|Count|Average|CPU Usage stats.|cpu, host|
-|used|Yes|Used Memory|Count|Average|Memory Usage stats.|host|
+|usage_user|No|CPU Usage|Count|Average|CPU Usage stats.|cpu, host|
+|used|Yes|Memory Usage|Count|Average|Memory Usage stats.|host|
## Microsoft.Network/connections
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ArpAvailability|Yes|Arp Availability|Percent|Average|ARP Availability from MSEE towards all peers.|PeeringType, Peer|
|BgpAvailability|Yes|Bgp Availability|Percent|Average|BGP Availability from MSEE towards all peers.|PeeringType, Peer|
-|BitsInPerSecond|No|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|PeeringType, DeviceRole|
-|BitsOutPerSecond|No|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|PeeringType, DeviceRole|
+|BitsInPerSecond|Yes|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|PeeringType, DeviceRole|
+|BitsOutPerSecond|Yes|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|PeeringType, DeviceRole|
|GlobalReachBitsInPerSecond|No|GlobalReachBitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|PeeredCircuitSKey|
|GlobalReachBitsOutPerSecond|No|GlobalReachBitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|PeeredCircuitSKey|
|QosDropBitsInPerSecond|Yes|DroppedInBitsPerSecond|BitsPerSecond|Average|Ingress bits of data dropped per second|No Dimensions|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|AdminState|Yes|AdminState|Count|Average|Admin state of the port|Link|
|LineProtocol|Yes|LineProtocol|Count|Average|Line protocol status of the port|Link|
-|PortBitsInPerSecond|Yes|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|Link|
-|PortBitsOutPerSecond|Yes|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|Link|
+|PortBitsInPerSecond|No|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|Link|
+|PortBitsOutPerSecond|No|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|Link|
|RxLightLevel|Yes|RxLightLevel|Count|Average|Rx Light level in dBm|Link, Lane|
|TxLightLevel|Yes|TxLightLevel|Count|Average|Tx light level in dBm|Link, Lane|
|TestResult|Yes|Test Result|Count|Average|Connection monitor test result|SourceAddress, SourceName, SourceResourceId, SourceType, Protocol, DestinationAddress, DestinationName, DestinationResourceId, DestinationType, DestinationPort, TestGroupName, TestConfigurationName, TestResultCriterion, SourceIP, DestinationIP, SourceSubnet, DestinationSubnet|
-## Microsoft.Network/p2sVpnGateways
+## microsoft.network/p2svpngateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|No Dimensions|
-|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|BytesPerSecond|Average|Point-to-site connection count of a gateway|Protocol, Instance|
## Microsoft.Network/privateDnsZones
|QpsByEndpoint|Yes|Queries by Endpoint Returned|Count|Total|Number of times a Traffic Manager endpoint was returned in the given time frame|EndpointName|
-## Microsoft.Network/virtualNetworkGateways
+## Microsoft.Network/virtualHubs
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|No Dimensions|
+|BgpPeerStatus|No|Bgp Peer Status|Count|Maximum|1 - Connected, 0 - Not connected|routeserviceinstance, bgppeerip, bgppeertype|
+|CountOfRoutesAdvertisedToPeer|No|Count Of Routes Advertised To Peer|Count|Maximum|Total number of routes advertised to peer|routeserviceinstance, bgppeerip, bgppeertype|
+|CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|
++
+## microsoft.network/virtualnetworkgateways
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
+|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
+|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network(Preview)|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
+|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network (Preview)|Count|Maximum|Number of VMs in the Virtual Network|roleInstance|
|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|No Dimensions|
-|P2SConnectionCount|Yes|P2S Connection Count|Count|Maximum|Point-to-site connection count of a gateway|Protocol|
-|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP|
-|TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP|
-|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP|
-|TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP|
-|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP|
-|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP|
-|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP|
-|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP|
-|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType|
-|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
-|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|BytesPerSecond|Average|Point-to-site connection count of a gateway|Protocol, Instance|
+|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance|
+|TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
+|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
+|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
+|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
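
Many of the gateway and tunnel metrics above now expose an Instance dimension. As a hedged illustration (same assumptions as the earlier sketch: azure-monitor-query and azure-identity packages, placeholder resource ID), the filter syntax `Instance eq '*'` asks Azure Monitor to return one time series per instance value:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for a virtual network gateway.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# "Instance eq '*'" splits the result by the Instance dimension, so each
# gateway instance's tunnel bandwidth comes back as its own time series.
response = client.query_resource(
    resource_id,
    metric_names=["TunnelAverageBandwidth"],
    timespan=timedelta(hours=1),
    aggregations=[MetricAggregationType.AVERAGE],
    filter="Instance eq '*'",
)

for metric in response.metrics:
    for series in metric.timeseries:
        # metadata_values holds the dimension values (for example, the instance name).
        print(series.metadata_values)
        for point in series.data:
            print(point.timestamp, point.average)
```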
## Microsoft.Network/virtualNetworks
|PeeringAvailability|Yes|Bgp Availability|Percent|Average|BGP Availability between VirtualRouter and remote peers|Peer|
-## Microsoft.Network/vpnGateways
+## microsoft.network/vpngateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|No Dimensions|
-|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP|
-|TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP|
-|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP|
-|TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP|
-|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP|
-|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP|
-|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP|
-|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP|
-|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType|
-|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
-|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance|
+|TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
## Microsoft.NotificationHubs/Namespaces/NotificationHubs
|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat|Computer, OSType, Version, SourceComputerId|
|Update|Yes|Update|Count|Average|Update|Computer, Product, Classification, UpdateState, Optional, Approved|

## Microsoft.Peering/peerings

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|PrefixLatency|Yes|Prefix Latency|Milliseconds|Average|Median prefix latency|PrefixName|

## Microsoft.PowerBIDedicated/capacities

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|cpu_metric|Yes|CPU (Gen2)|Percent|Average|CPU Utilization. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
+|cpu_workload_metric|Yes|CPU Per Workload (Gen2)|Percent|Average|CPU Utilization Per Workload. Supported only for Power BI Embedded Generation 2 resources.|Workload|
|memory_metric|Yes|Memory (Gen1)|Bytes|Average|Memory. Range 0-3 GB for A1, 0-5 GB for A2, 0-10 GB for A3, 0-25 GB for A4, 0-50 GB for A5 and 0-100 GB for A6. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
|memory_thrashing_metric|Yes|Memory Thrashing (Datasets) (Gen1)|Percent|Average|Average memory thrashing. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
|overload_metric|Yes|Overload (Gen2)|Count|Average|Resource Overload, 1 if resource is overloaded, otherwise 0. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
-|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
-|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|No Dimensions|
+|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
+|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections established to the service. It is aggregated by adding all the online connections.|No Dimensions|
## Microsoft.Sql/managedInstances
|virtual_core_count|Yes|Virtual core count|Count|Average|Virtual core count|No Dimensions|
+## Microsoft.Sql/servers/elasticPools
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|allocated_data_storage|Yes|Data space allocated|Bytes|Average|Data space allocated|No Dimensions|
+|allocated_data_storage_percent|Yes|Data space allocated percent|Percent|Maximum|Data space allocated percent|No Dimensions|
+|cpu_limit|Yes|CPU limit|Count|Average|CPU limit. Applies to vCore-based elastic pools.|No Dimensions|
+|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
+|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based elastic pools.|No Dimensions|
+|database_allocated_data_storage|No|Data space allocated|Bytes|Average|Data space allocated|DatabaseResourceId|
+|database_cpu_limit|No|CPU limit|Count|Average|CPU limit|DatabaseResourceId|
+|database_cpu_percent|No|CPU percentage|Percent|Average|CPU percentage|DatabaseResourceId|
+|database_cpu_used|No|CPU used|Count|Average|CPU used|DatabaseResourceId|
+|database_dtu_consumption_percent|No|DTU percentage|Percent|Average|DTU percentage|DatabaseResourceId|
+|database_eDTU_used|No|eDTU used|Count|Average|eDTU used|DatabaseResourceId|
+|database_log_write_percent|No|Log IO percentage|Percent|Average|Log IO percentage|DatabaseResourceId|
+|database_physical_data_read_percent|No|Data IO percentage|Percent|Average|Data IO percentage|DatabaseResourceId|
+|database_sessions_percent|No|Sessions percentage|Percent|Average|Sessions percentage|DatabaseResourceId|
+|database_storage_used|No|Data space used|Bytes|Average|Data space used|DatabaseResourceId|
+|database_workers_percent|No|Workers percentage|Percent|Average|Workers percentage|DatabaseResourceId|
+|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based elastic pools.|No Dimensions|
+|eDTU_limit|Yes|eDTU limit|Count|Average|eDTU limit. Applies to DTU-based elastic pools.|No Dimensions|
+|eDTU_used|Yes|eDTU used|Count|Average|eDTU used. Applies to DTU-based elastic pools.|No Dimensions|
+|log_write_percent|Yes|Log IO percentage|Percent|Average|Log IO percentage|No Dimensions|
+|physical_data_read_percent|Yes|Data IO percentage|Percent|Average|Data IO percentage|No Dimensions|
+|sessions_percent|Yes|Sessions percentage|Percent|Average|Sessions percentage|No Dimensions|
+|sqlserver_process_core_percent|Yes|SQL Server process core percent|Percent|Maximum|CPU usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
+|sqlserver_process_memory_percent|Yes|SQL Server process memory percent|Percent|Maximum|Memory usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
+|storage_limit|Yes|Data max size|Bytes|Average|Data max size|No Dimensions|
+|storage_percent|Yes|Data space used percent|Percent|Average|Data space used percent|No Dimensions|
+|storage_used|Yes|Data space used|Bytes|Average|Data space used|No Dimensions|
+|tempdb_data_size|Yes|Tempdb Data File Size Kilobytes|Count|Maximum|Space used in tempdb data files in kilobytes.|No Dimensions|
+|tempdb_log_size|Yes|Tempdb Log File Size Kilobytes|Count|Maximum|Space used in tempdb transaction log file in kilobytes.|No Dimensions|
+|tempdb_log_used_percent|Yes|Tempdb Percent Log Used|Percent|Maximum|Space used percentage in tempdb transaction log file|No Dimensions|
+|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage|No Dimensions|
+|xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent|No Dimensions|
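+
+Because the elastic pool metrics above differ in unit, aggregation type, and whether they expose the DatabaseResourceId dimension, it can help to enumerate the metric definitions a given pool actually reports before building charts or alerts. The following is a rough sketch under the same assumptions as the earlier examples (azure-monitor-query and azure-identity packages, placeholder resource ID); attribute names on the returned definitions may differ slightly by package version.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID for a SQL elastic pool.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Sql/servers/<server-name>/elasticPools/<pool-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Enumerate the metric definitions the resource exposes (name, unit, dimensions),
# which should line up with the rows in the table above.
for definition in client.list_metric_definitions(resource_id):
    print(definition.name, definition.unit, definition.dimensions)
```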
## Microsoft.Sql/servers/databases

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage. Not applicable to data warehouses.|No Dimensions|
|xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent. Not applicable to data warehouses.|No Dimensions|
-## Microsoft.Sql/servers/elasticPools
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|allocated_data_storage|Yes|Data space allocated|Bytes|Average|Data space allocated|No Dimensions|
-|allocated_data_storage_percent|Yes|Data space allocated percent|Percent|Maximum|Data space allocated percent|No Dimensions|
-|cpu_limit|Yes|CPU limit|Count|Average|CPU limit. Applies to vCore-based elastic pools.|No Dimensions|
-|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
-|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based elastic pools.|No Dimensions|
-|database_allocated_data_storage|No|Data space allocated|Bytes|Average|Data space allocated|DatabaseResourceId|
-|database_cpu_limit|No|CPU limit|Count|Average|CPU limit|DatabaseResourceId|
-|database_cpu_percent|No|CPU percentage|Percent|Average|CPU percentage|DatabaseResourceId|
-|database_cpu_used|No|CPU used|Count|Average|CPU used|DatabaseResourceId|
-|database_dtu_consumption_percent|No|DTU percentage|Percent|Average|DTU percentage|DatabaseResourceId|
-|database_eDTU_used|No|eDTU used|Count|Average|eDTU used|DatabaseResourceId|
-|database_log_write_percent|No|Log IO percentage|Percent|Average|Log IO percentage|DatabaseResourceId|
-|database_physical_data_read_percent|No|Data IO percentage|Percent|Average|Data IO percentage|DatabaseResourceId|
-|database_sessions_percent|No|Sessions percentage|Percent|Average|Sessions percentage|DatabaseResourceId|
-|database_storage_used|No|Data space used|Bytes|Average|Data space used|DatabaseResourceId|
-|database_workers_percent|No|Workers percentage|Percent|Average|Workers percentage|DatabaseResourceId|
-|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based elastic pools.|No Dimensions|
-|eDTU_limit|Yes|eDTU limit|Count|Average|eDTU limit. Applies to DTU-based elastic pools.|No Dimensions|
-|eDTU_used|Yes|eDTU used|Count|Average|eDTU used. Applies to DTU-based elastic pools.|No Dimensions|
-|log_write_percent|Yes|Log IO percentage|Percent|Average|Log IO percentage|No Dimensions|
-|physical_data_read_percent|Yes|Data IO percentage|Percent|Average|Data IO percentage|No Dimensions|
-|sessions_percent|Yes|Sessions percentage|Percent|Average|Sessions percentage|No Dimensions|
-|sqlserver_process_core_percent|Yes|SQL Server process core percent|Percent|Maximum|CPU usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
-|sqlserver_process_memory_percent|Yes|SQL Server process memory percent|Percent|Maximum|Memory usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
-|storage_limit|Yes|Data max size|Bytes|Average|Data max size|No Dimensions|
-|storage_percent|Yes|Data space used percent|Percent|Average|Data space used percent|No Dimensions|
-|storage_used|Yes|Data space used|Bytes|Average|Data space used|No Dimensions|
-|tempdb_data_size|Yes|Tempdb Data File Size Kilobytes|Count|Maximum|Space used in tempdb data files in kilobytes.|No Dimensions|
-|tempdb_log_size|Yes|Tempdb Log File Size Kilobytes|Count|Maximum|Space used in tempdb transaction log file in kilobytes.|No Dimensions|
-|tempdb_log_used_percent|Yes|Tempdb Percent Log Used|Percent|Maximum|Space used percentage in tempdb transaction log file|No Dimensions|
-|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage|No Dimensions|
-|xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent|No Dimensions|
## Microsoft.Storage/storageAccounts
|FileCount|No|File Count|Count|Average|The number of files in the storage account.|FileShare|
|FileShareCapacityQuota|No|File Share Capacity Quota|Bytes|Average|The upper limit on the amount of storage that can be used by Azure Files Service in bytes.|FileShare|
|FileShareCount|No|File Share Count|Count|Average|The number of file shares in the storage account.|No Dimensions|
-|FileShareProvisionedIOPS|No|File Share Provisioned IOPS|Bytes|Average|The baseline number of provisioned IOPS for the premium file share in the premium files storage account. This number is calculated based on the provisioned size (quota) of the share capacity.|FileShare|
+|FileShareProvisionedIOPS|No|File Share Provisioned IOPS|CountPerSecond|Average|The baseline number of provisioned IOPS for the premium file share in the premium files storage account. This number is calculated based on the provisioned size (quota) of the share capacity.|FileShare|
|FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account's Files Service.|FileShare|
|FileShareSnapshotSize|No|File Share Snapshot Size|Bytes|Average|The amount of storage used by the snapshots in storage account's File service in bytes.|FileShare|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ClientIOPS|Yes|Total Client IOPS|Count|Average|The rate of client file operations processed by the Cache.|No Dimensions|
-|ClientLatency|Yes|Average Client Latency|Milliseconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
+|ClientLatency|Yes|Average Client Latency|MilliSeconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
|ClientLockIOPS|Yes|Client Lock IOPS|CountPerSecond|Average|Client file locking operations per second.|No Dimensions|
|ClientMetadataReadIOPS|Yes|Client Metadata Read IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data reads, that do not modify persistent state.|No Dimensions|
|ClientMetadataWriteIOPS|Yes|Client Metadata Write IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data writes, that modify persistent state.|No Dimensions|
|ClientReadIOPS|Yes|Client Read IOPS|CountPerSecond|Average|Client read operations per second.|No Dimensions|
|ClientReadThroughput|Yes|Average Cache Read Throughput|BytesPerSecond|Average|Client read data transfer rate.|No Dimensions|
+|ClientStatus|Yes|Client Status|Count|Total|Client connection information.|ClientSource, CacheAddress, ClientAddress, Protocol, ConnectionType|
|ClientWriteIOPS|Yes|Client Write IOPS|CountPerSecond|Average|Client write operations per second.|No Dimensions| |ClientWriteThroughput|Yes|Average Cache Write Throughput|BytesPerSecond|Average|Client write data transfer rate.|No Dimensions|
+|FileOps|Yes|File Operations|CountPerSecond|Average|Number of file operations per second.|SourceFile, Rank, FileType|
+|FileReads|Yes|File Reads|BytesPerSecond|Average|Number of bytes per second read from a file.|SourceFile, Rank, FileType|
+|FileUpdates|Yes|File Updates|CountPerSecond|Average|Number of directory updates and metadata operations per second.|SourceFile, Rank, FileType|
+|FileWrites|Yes|File Writes|BytesPerSecond|Average|Number of bytes per second written to a file.|SourceFile, Rank, FileType|
|StorageTargetAsyncWriteThroughput|Yes|StorageTarget Asynchronous Write Throughput|BytesPerSecond|Average|The rate the Cache asynchronously writes data to a particular StorageTarget. These are opportunistic writes that do not cause clients to block.|StorageTarget|
+|StorageTargetBlocksRecycled|Yes|Storage Target Blocks Recycled|Count|Average|Total number of 16k cache blocks recycled (freed) per Storage Target.|StorageTarget|
|StorageTargetFillThroughput|Yes|StorageTarget Fill Throughput|BytesPerSecond|Average|The rate the Cache reads data from the StorageTarget to handle a cache miss.|StorageTarget|
+|StorageTargetFreeReadSpace|Yes|Storage Target Free Read Space|Bytes|Average|Read space available for caching files associated with a storage target.|StorageTarget|
+|StorageTargetFreeWriteSpace|Yes|Storage Target Free Write Space|Bytes|Average|Write space available for dirty data associated with a storage target.|StorageTarget|
|StorageTargetHealth|Yes|Storage Target Health|Count|Average|Boolean results of connectivity test between the Cache and Storage Targets.|No Dimensions| |StorageTargetIOPS|Yes|Total StorageTarget IOPS|Count|Average|The rate of all file operations the Cache sends to a particular StorageTarget.|StorageTarget|
-|StorageTargetLatency|Yes|StorageTarget Latency|Milliseconds|Average|The average round trip latency of all the file operations the Cache sends to a partricular StorageTarget.|StorageTarget|
+|StorageTargetLatency|Yes|StorageTarget Latency|MilliSeconds|Average|The average round trip latency of all the file operations the Cache sends to a particular StorageTarget.|StorageTarget|
|StorageTargetMetadataReadIOPS|Yes|StorageTarget Metadata Read IOPS|CountPerSecond|Average|The rate of file operations that do not modify persistent state, and excluding the read operation, that the Cache sends to a particular StorageTarget.|StorageTarget| |StorageTargetMetadataWriteIOPS|Yes|StorageTarget Metadata Write IOPS|CountPerSecond|Average|The rate of file operations that do modify persistent state and excluding the write operation, that the Cache sends to a particular StorageTarget.|StorageTarget| |StorageTargetReadAheadThroughput|Yes|StorageTarget Read Ahead Throughput|BytesPerSecond|Average|The rate the Cache opportunisticly reads data from the StorageTarget.|StorageTarget| |StorageTargetReadIOPS|Yes|StorageTarget Read IOPS|CountPerSecond|Average|The rate of file read operations the Cache sends to a particular StorageTarget.|StorageTarget|
+|StorageTargetRecycleRate|Yes|Storage Target Recycle Rate|BytesPerSecond|Average|Cache space recycle rate associated with a storage target in the HPC Cache. This is the rate at which existing data is cleared from the cache to make room for new data.|StorageTarget|
|StorageTargetSyncWriteThroughput|Yes|StorageTarget Synchronous Write Throughput|BytesPerSecond|Average|The rate the Cache synchronously writes data to a particular StorageTarget. These are writes that do cause clients to block.|StorageTarget| |StorageTargetTotalReadThroughput|Yes|StorageTarget Total Read Throughput|BytesPerSecond|Average|The total rate that the Cache reads data from a particular StorageTarget.|StorageTarget| |StorageTargetTotalWriteThroughput|Yes|StorageTarget Total Write Throughput|BytesPerSecond|Average|The total rate that the Cache writes data to a particular StorageTarget.|StorageTarget|
+|StorageTargetUsedReadSpace|Yes|Storage Target Used Read Space|Bytes|Average|Read space used by cached files associated with a storage target.|StorageTarget|
+|StorageTargetUsedWriteSpace|Yes|Storage Target Used Write Space|Bytes|Average|Write space used by dirty data associated with a storage target.|StorageTarget|
|StorageTargetWriteIOPS|Yes|StorageTarget Write IOPS|Count|Average|The rate of the file write operations the Cache sends to a particular StorageTarget.|StorageTarget|
+|TotalBlocksRecycled|Yes|Total Blocks Recycled|Count|Average|Total number of 16k cache blocks recycled (freed) for the HPC Cache.|No Dimensions|
+|TotalFreeReadSpace|Yes|Free Read Space|Bytes|Average|Total space available for caching read files.|No Dimensions|
+|TotalFreeWriteSpace|Yes|Free Write Space|Bytes|Average|Total write space available to store changed data in the cache.|No Dimensions|
+|TotalRecycleRate|Yes|Recycle Rate|BytesPerSecond|Average|Total cache space recycle rate in the HPC Cache. This is the rate at which existing data is cleared from the cache to make room for new data.|No Dimensions|
+|TotalUsedReadSpace|Yes|Used Read Space|Bytes|Average|Total read space used by cached files for the HPC Cache.|No Dimensions|
+|TotalUsedWriteSpace|Yes|Used Write Space|Bytes|Average|Total write space used by dirty data for the HPC Cache.|No Dimensions|
|Uptime|Yes|Uptime|Count|Average|Boolean results of connectivity test between the Cache and monitoring system.|No Dimensions|
||||||||
|ServerSyncSessionResult|Yes|Sync Session Result|Count|Average|Metric that logs a value of 1 each time the Server Endpoint successfully completes a Sync Session with the Cloud Endpoint|SyncGroupName, ServerEndpointName, SyncDirection|
|StorageSyncBatchTransferredFileBytes|Yes|Bytes synced|Bytes|Total|Total file size transferred for Sync Sessions|SyncGroupName, ServerEndpointName, SyncDirection|
-|StorageSyncRecallComputedSuccessRate|Yes|Cloud tiering recall success rate|Percent|Average|Percentage of all recalls that were successful|SyncGroupName, ServerName|
+|StorageSyncComputedCacheHitRate|Yes|Cloud tiering cache hit rate|Percent|Average|Percentage of bytes that were served from the cache|SyncGroupName, ServerName, ServerEndpointName|
+|StorageSyncRecallComputedSuccessRate|Yes|Cloud tiering recall success rate|Percent|Average|Percentage of all recalls that were successful|SyncGroupName, ServerName, ServerEndpointName|
|StorageSyncRecalledNetworkBytesByApplication|Yes|Cloud tiering recall size by application|Bytes|Total|Size of data recalled by application|SyncGroupName, ServerName, ApplicationName|
-|StorageSyncRecalledTotalNetworkBytes|Yes|Cloud tiering recall size|Bytes|Total|Size of data recalled|SyncGroupName, ServerName|
-|StorageSyncRecallIOTotalSizeBytes|Yes|Cloud tiering recall|Bytes|Total|Total size of data recalled by the server|ServerName|
-|StorageSyncRecallThroughputBytesPerSecond|Yes|Cloud tiering recall throughput|BytesPerSecond|Average|Size of data recall throughput|SyncGroupName, ServerName|
+|StorageSyncRecalledTotalNetworkBytes|Yes|Cloud tiering recall size|Bytes|Total|Size of data recalled|SyncGroupName, ServerName, ServerEndpointName|
+|StorageSyncRecallThroughputBytesPerSecond|Yes|Cloud tiering recall throughput|BytesPerSecond|Average|Size of data recall throughput|SyncGroupName, ServerName, ServerEndpointName|
|StorageSyncServerHeartbeat|Yes|Server Online Status|Count|Maximum|Metric that logs a value of 1 each time the registered server successfully records a heartbeat with the Cloud Endpoint|ServerName|
|StorageSyncSyncSessionAppliedFilesCount|Yes|Files Synced|Count|Total|Count of Files synced|SyncGroupName, ServerEndpointName, SyncDirection|
-|StorageSyncSyncSessionPerItemErrorsCount|Yes|Files not syncing|Count|Total|Count of files failed to sync|SyncGroupName, ServerEndpointName, SyncDirection|
--
-## microsoft.storagesync/storageSyncServices/registeredServers
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ServerHeartbeat|Yes|Server Online Status|Count|Maximum|Metric that logs a value of 1 each time the resigtered server successfully records a heartbeat with the Cloud Endpoint|ServerResourceId, ServerName|
-|ServerRecallIOTotalSizeBytes|Yes|Cloud tiering recall|Bytes|Total|Total size of data recalled by the server|ServerResourceId, ServerName|
--
-## microsoft.storagesync/storageSyncServices/syncGroups
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|SyncGroupBatchTransferredFileBytes|Yes|Bytes synced|Bytes|Total|Total file size transferred for Sync Sessions|SyncGroupName, ServerEndpointName, SyncDirection|
-|SyncGroupSyncSessionAppliedFilesCount|Yes|Files Synced|Count|Total|Count of Files synced|SyncGroupName, ServerEndpointName, SyncDirection|
-|SyncGroupSyncSessionPerItemErrorsCount|Yes|Files not syncing|Count|Total|Count of files failed to sync|SyncGroupName, ServerEndpointName, SyncDirection|
--
-## microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ServerEndpointBatchTransferredFileBytes|Yes|Bytes synced|Bytes|Total|Total file size transferred for Sync Sessions|ServerEndpointName, SyncDirection|
-|ServerEndpointSyncSessionAppliedFilesCount|Yes|Files Synced|Count|Total|Count of Files synced|ServerEndpointName, SyncDirection|
-|ServerEndpointSyncSessionPerItemErrorsCount|Yes|Files not syncing|Count|Total|Count of files failed to sync|ServerEndpointName, SyncDirection|
+|StorageSyncSyncSessionPerItemErrorsCount|Yes|Files not syncing|Count|Average|Count of files failed to sync|SyncGroupName, ServerEndpointName, SyncDirection|
+|StorageSyncTieringCacheSizeBytes|Yes|Server cache size|Bytes|Average|Size of data cached on the server|SyncGroupName, ServerName, ServerEndpointName|
## Microsoft.StreamAnalytics/streamingjobs

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AMLCalloutFailedRequests|Yes|Failed Function Requests|Count|Total|Failed Function Requests|LogicalName, PartitionId|
-|AMLCalloutInputEvents|Yes|Function Events|Count|Total|Function Events|LogicalName, PartitionId|
-|AMLCalloutRequests|Yes|Function Requests|Count|Total|Function Requests|LogicalName, PartitionId|
-|ConversionErrors|Yes|Data Conversion Errors|Count|Total|Data Conversion Errors|LogicalName, PartitionId|
-|DeserializationError|Yes|Input Deserialization Errors|Count|Total|Input Deserialization Errors|LogicalName, PartitionId|
-|DroppedOrAdjustedEvents|Yes|Out of order Events|Count|Total|Out of order Events|LogicalName, PartitionId|
-|EarlyInputEvents|Yes|Early Input Events|Count|Total|Early Input Events|LogicalName, PartitionId|
-|Errors|Yes|Runtime Errors|Count|Total|Runtime Errors|LogicalName, PartitionId|
-|InputEventBytes|Yes|Input Event Bytes|Bytes|Total|Input Event Bytes|LogicalName, PartitionId|
-|InputEvents|Yes|Input Events|Count|Total|Input Events|LogicalName, PartitionId|
-|InputEventsSourcesBacklogged|Yes|Backlogged Input Events|Count|Maximum|Backlogged Input Events|LogicalName, PartitionId|
-|InputEventsSourcesPerSecond|Yes|Input Sources Received|Count|Total|Input Sources Received|LogicalName, PartitionId|
-|LateInputEvents|Yes|Late Input Events|Count|Total|Late Input Events|LogicalName, PartitionId|
-|OutputEvents|Yes|Output Events|Count|Total|Output Events|LogicalName, PartitionId|
-|OutputWatermarkDelaySeconds|Yes|Watermark Delay|Seconds|Maximum|Watermark Delay|LogicalName, PartitionId|
-|ProcessCPUUsagePercentage|Yes|CPU % Utilization (Preview)|Percent|Maximum|CPU % Utilization (Preview)|LogicalName, PartitionId|
-|ResourceUtilization|Yes|SU % Utilization|Percent|Maximum|SU % Utilization|LogicalName, PartitionId|
+|AMLCalloutFailedRequests|Yes|Failed Function Requests|Count|Total|Failed Function Requests|LogicalName, PartitionId, ProcessorInstance|
+|AMLCalloutInputEvents|Yes|Function Events|Count|Total|Function Events|LogicalName, PartitionId, ProcessorInstance|
+|AMLCalloutRequests|Yes|Function Requests|Count|Total|Function Requests|LogicalName, PartitionId, ProcessorInstance|
+|ConversionErrors|Yes|Data Conversion Errors|Count|Total|Data Conversion Errors|LogicalName, PartitionId, ProcessorInstance|
+|DeserializationError|Yes|Input Deserialization Errors|Count|Total|Input Deserialization Errors|LogicalName, PartitionId, ProcessorInstance|
+|DroppedOrAdjustedEvents|Yes|Out of order Events|Count|Total|Out of order Events|LogicalName, PartitionId, ProcessorInstance|
+|EarlyInputEvents|Yes|Early Input Events|Count|Total|Early Input Events|LogicalName, PartitionId, ProcessorInstance|
+|Errors|Yes|Runtime Errors|Count|Total|Runtime Errors|LogicalName, PartitionId, ProcessorInstance|
+|InputEventBytes|Yes|Input Event Bytes|Bytes|Total|Input Event Bytes|LogicalName, PartitionId, ProcessorInstance|
+|InputEvents|Yes|Input Events|Count|Total|Input Events|LogicalName, PartitionId, ProcessorInstance|
+|InputEventsSourcesBacklogged|Yes|Backlogged Input Events|Count|Maximum|Backlogged Input Events|LogicalName, PartitionId, ProcessorInstance|
+|InputEventsSourcesPerSecond|Yes|Input Sources Received|Count|Total|Input Sources Received|LogicalName, PartitionId, ProcessorInstance|
+|LateInputEvents|Yes|Late Input Events|Count|Total|Late Input Events|LogicalName, PartitionId, ProcessorInstance|
+|OutputEvents|Yes|Output Events|Count|Total|Output Events|LogicalName, PartitionId, ProcessorInstance|
+|OutputWatermarkDelaySeconds|Yes|Watermark Delay|Seconds|Maximum|Watermark Delay|LogicalName, PartitionId, ProcessorInstance|
+|ProcessCPUUsagePercentage|Yes|CPU % Utilization (Preview)|Percent|Maximum|CPU % Utilization (Preview)|LogicalName, PartitionId, ProcessorInstance|
+|ResourceUtilization|Yes|SU % Utilization|Percent|Maximum|SU % Utilization|LogicalName, PartitionId, ProcessorInstance|
## Microsoft.Synapse/workspaces
|PercentageCpuReady|Yes|Percentage CPU Ready|Milliseconds|Total|Ready time is the time spend waiting for CPU(s) to become available in the past update interval.|No Dimensions|
+## Microsoft.Web/connections
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ApiConnectionRequests|Yes|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
++ ## Microsoft.Web/hostingEnvironments |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|TcpSynSent|Yes|TCP Syn Sent|Count|Average|The average number of sockets in SYN_SENT state across all the instances of the plan.|Instance| |TcpTimeWait|Yes|TCP Time Wait|Count|Average|The average number of sockets in TIME_WAIT state across all the instances of the plan.|Instance| + ## Microsoft.Web/sites+ |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket().|Instance|
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Not applicable to Azure Functions. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. Not applicable to Azure Functions. For more information about this metric, please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. Only present for Azure Functions.|Instance|
|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance| |TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance| + ## Microsoft.Web/sites/slots |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Not applicable to Azure Functions. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. Not applicable to Azure Functions. For more information about this metric, please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance| |TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance| + ## Microsoft.Web/staticSites |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Azure Monitor Resource Logs supported services and categories description: Reference of Azure Monitor Understand the supported services and event schema for Azure resource logs. Previously updated : 05/26/2021 Last updated : 07/06/2021 # Supported categories for Azure Resource Logs
|||| |Audit|Audit|Yes| |Operational|Operational|Yes|
+|Request|Request|Yes|
## Microsoft.Batch/batchAccounts
|accounts|Databricks Accounts|No| |clusters|Databricks Clusters|No| |dbfs|Databricks File System|No|
+|featureStore|Databricks Feature Store|Yes|
+|genie|Databricks Genie|Yes|
+|globalInitScripts|Databricks Global Init Scripts|Yes|
+|iamRole|Databricks IAM Role|Yes|
|instancePools|Instance Pools|No| |jobs|Databricks Jobs|No|
+|mlflowAcledArtifact|Databricks MLFlow Acled Artifact|Yes|
+|mlflowExperiment|Databricks MLFlow Experiment|Yes|
|notebook|Databricks Notebook|No|
+|RemoteHistoryService|Databricks Remote History Service|Yes|
|secrets|Databricks Secrets|No|
+|sqlanalytics|Databricks SQL Analytics|Yes|
|sqlPermissions|Databricks SQLPermissions|No| |ssh|Databricks SSH|No| |workspace|Databricks Workspace|No| - ## Microsoft.DataCollaboration/workspaces |Category|Category Display Name|Costs To Export|
|Proposals|Proposals|No| |Scripts|Scripts|No| - ## Microsoft.DataFactory/factories |Category|Category Display Name|Costs To Export|
|ResourceProviderOperation|ResourceProviderOperation|Yes|
-## Microsoft.DocumentDB/databaseAccounts
+## Microsoft.DocumentDB/DatabaseAccounts
|Category|Category Display Name|Costs To Export| ||||
|Category|Category Display Name|Costs To Export| |||| |AuditLogs|Audit logs|No|
+|DiagnosticLogs|Diagnostic logs|Yes|
## microsoft.insights/autoscalesettings
|TableUsageStatistics|Table usage statistics|No|
-## Microsoft.Logic/integrationAccounts
+## Microsoft.Logic/IntegrationAccounts
|Category|Category Display Name|Costs To Export| |||| |IntegrationAccountTrackingEvents|Integration Account track events|No|
-## Microsoft.Logic/workflows
+## Microsoft.Logic/Workflows
|Category|Category Display Name|Costs To Export| ||||
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No| |AmlComputeJobEvent|AmlComputeJobEvent|No| |AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
+|ComputeInstanceEvent|ComputeInstanceEvent|Yes|
+|DataLabelChangeEvent|DataLabelChangeEvent|Yes|
+|DataLabelReadEvent|DataLabelReadEvent|Yes|
+|DataSetChangeEvent|DataSetChangeEvent|Yes|
+|DataSetReadEvent|DataSetReadEvent|Yes|
+|DataStoreChangeEvent|DataStoreChangeEvent|Yes|
+|DataStoreReadEvent|DataStoreReadEvent|Yes|
+|DeploymentEventACI|DeploymentEventACI|Yes|
+|DeploymentEventAKS|DeploymentEventAKS|Yes|
+|DeploymentReadEvent|DeploymentReadEvent|Yes|
+|EnvironmentChangeEvent|EnvironmentChangeEvent|Yes|
+|EnvironmentReadEvent|EnvironmentReadEvent|Yes|
+|InferencingOperationACI|InferencingOperationACI|Yes|
+|InferencingOperationAKS|InferencingOperationAKS|Yes|
+|ModelsActionEvent|ModelsActionEvent|Yes|
+|ModelsChangeEvent|ModelsChangeEvent|Yes|
+|ModelsReadEvent|ModelsReadEvent|Yes|
+|PipelineChangeEvent|PipelineChangeEvent|Yes|
+|PipelineReadEvent|PipelineReadEvent|Yes|
+|RunEvent|RunEvent|Yes|
+|RunReadEvent|RunReadEvent|Yes|
## Microsoft.Media/mediaservices
|Category|Category Display Name|Costs To Export| |||| |KeyDeliveryRequests|Key Delivery Requests|No|
+|MediaAccount|Media Account Health Status|Yes|
## Microsoft.Network/applicationgateways
|AzureFirewallNetworkRule|Azure Firewall Network Rule|No|
-## Microsoft.Network/bastionHosts
+## microsoft.network/bastionHosts
|Category|Category Display Name|Costs To Export| ||||
|NetworkSecurityGroupRuleCounter|Network Security Group Rule Counter|No|
-## Microsoft.Network/p2sVpnGateways
+## microsoft.network/p2svpngateways
|Category|Category Display Name|Costs To Export| ||||
|ProbeHealthStatusEvents|Traffic Manager Probe Health Results Event|No|
-## Microsoft.Network/virtualNetworkGateways
+## microsoft.network/virtualnetworkgateways
|Category|Category Display Name|Costs To Export| ||||
|VMProtectionAlerts|VM protection alerts|No|
-## Microsoft.Network/vpnGateways
+## microsoft.network/vpngateways
|Category|Category Display Name|Costs To Export| ||||
|Category|Category Display Name|Costs To Export| ||||
+|DataSensitivityLogEvent|DataSensitivity|Yes|
|ScanStatusLogEvent|ScanStatus|No|
|ResourceUsageStats|Resource Usage Statistics|No| |SQLSecurityAuditEvents|SQL Security Audit Event|No| - ## Microsoft.Sql/managedInstances/databases |Category|Category Display Name|Costs To Export|
|Timeouts|Timeouts|No| |Waits|Waits|No| - ## Microsoft.Storage/storageAccounts/blobServices |Category|Category Display Name|Costs To Export|
|BigDataPoolAppsEnded|Big Data Pool Applications Ended|No|
+## Microsoft.Synapse/workspaces/kustoPools
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command|Command|No|
+|FailedIngestion|Failed ingest operations|No|
+|IngestionBatching|Ingestion batching|No|
+|Query|Query|No|
+|SucceededIngestion|Successful ingest operations|No|
+|TableDetails|Table details|No|
+|TableUsageStatistics|Table usage statistics|No|
++ ## Microsoft.Synapse/workspaces/sqlPools |Category|Category Display Name|Costs To Export|
|Management|Management|No|
-## microsoft.web/hostingenvironments
+## Microsoft.Web/hostingEnvironments
|Category|Category Display Name|Costs To Export| ||||
|AppServiceAppLogs|App Service Application Logs|No| |AppServiceAuditLogs|Access Audit Logs|No| |AppServiceConsoleLogs|App Service Console Logs|No|
+|AppServiceDiagnosticToolsLogs|Report Diagnostic Tools Logs|Yes|
|AppServiceFileAuditLogs|Site Content Change Audit Logs|No| |AppServiceHTTPLogs|HTTP logs|No| |AppServiceIPSecAuditLogs|IPSecurity Audit Logs|No|
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials
#### Azure Resource Manager Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to this resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md)
-Additionally, specific experiences (such as the LogicApp connector) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
+Additionally, specific experiences (such as the LogicApp connector, the Update Management solution, and the Workspace Summary blade in the portal, which shows the solutions dashboard) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
## Review and validate your Private Link setup
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: "Azure Monitor docs: What's new for May, 2021"
-description: "What's new in the Azure Monitor docs for May, 2021."
+ Title: "Azure Monitor docs: What's new for June 2021"
+description: "What's new in the Azure Monitor docs for June 2021."
Previously updated : 06/03/2021 Last updated : 07/12/2021
-# Azure Monitor docs: What's new for May, 2021
+# Azure Monitor docs: What's new for June, 2021
-Welcome to what's new in the Azure Monitor docs from May, 2021. This article lists some of the major changes to docs during this period.
+This article lists the significant changes to Azure Monitor docs during the month of June.
-## General
+## Agents
-**Updated articles**
+### Updated articles
-- [Azure Monitor Frequently Asked Questions](faq.yml)-- [Azure Monitor partner integrations](partners.md)
+- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)
+- [Overview of Azure Monitor agents](agents/agents-overview.md)
+- [Configure data collection for the Azure Monitor agent (preview)](agents/data-collection-rule-azure-monitor-agent.md)
## Alerts
-**Updated articles**
+### New articles
+
+- [Migrate Azure Monitor Application Insights smart detection to alerts (Preview)](alerts/alerts-smart-detections-migration.md)
+
+### Updated articles
-- [Log alerts in Azure Monitor](alerts/alerts-unified-log.md)
+- [Create Metric Alerts for Logs in Azure Monitor](alerts/alerts-metric-logs.md)
+- [Troubleshoot log alerts in Azure Monitor](alerts/alerts-troubleshoot-log.md)
## Application Insights
-**New articles**
+### New articles
-- [Private testing](app/availability-private-test.md)
+- [Azure AD authentication for Application Insights (Preview)](app/azure-ad-authentication.md)
+- [Quickstart: Monitor an ASP.NET Core app with Azure Monitor Application Insights](app/dotnet-quickstart.md)
-**Updated articles**
+### Updated articles
+- [Work Item Integration](app/work-item-integration.md)
+- [Azure AD authentication for Application Insights (Preview)](app/azure-ad-authentication.md)
- [Release annotations for Application Insights](app/annotations.md)-- [Application Insights logging with .NET](app/ilogger.md)-- [Diagnose exceptions in web apps with Application Insights](app/asp-net-exceptions.md)-- [Application Monitoring for Azure App Service](app/azure-web-apps.md)-- [What is auto-instrumentation or codeless attach - Azure Monitor Application Insights?](app/codeless-overview.md)
+- [Connection strings](app/sdk-connection-string.md)
+- [Telemetry processors (preview) - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors.md)
+- [IP addresses used by Azure Monitor](app/ip-addresses.md)
- [Java codeless application monitoring Azure Monitor Application Insights](app/java-in-process-agent.md)-- [Upgrading from Application Insights Java 2.x SDK](app/java-standalone-upgrade-from-2x.md)-- [Quickstart: Get started with Application Insights in a Java web project](app/java-2x-get-started.md) - [Adding the JVM arg - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Create and run custom availability tests using Azure Functions](app/availability-azure-functions.md)-- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
+- [Application Insights for ASP.NET Core applications](app/asp-net-core.md)
+- [Telemetry processor examples - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors-examples.md)
+- [Application security detection pack (preview)](app/proactive-application-security-detection-pack.md)
+- [Smart detection in Application Insights](app/proactive-diagnostics.md)
+- [Abnormal rise in exception volume (preview)](app/proactive-exception-volume.md)
+- [Smart detection - Performance Anomalies](app/proactive-performance-diagnostics.md)
+- [Memory leak detection (preview)](app/proactive-potential-memory-leak.md)
+- [Degradation in trace severity ratio (preview)](app/proactive-trace-severity.md)
## Containers
-**Updated articles**
+### Updated articles
-- [Configure agent data collection for Container insights](containers/container-insights-agent-config.md)
+- [How to query logs from Container insights](containers/container-insights-log-search.md)
## Essentials
-**Updated articles**
+### Updated articles
-- [Supported metrics with Azure Monitor](essentials/metrics-supported.md) - [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)
+- [Resource Manager template samples for diagnostic settings in Azure Monitor](essentials/resource-manager-diagnostic-settings.md)
+
+## General
+
+### New articles
+
+- [Azure Monitor Frequently Asked Questions](faq.yml)
+
+### Updated articles
+
+- [Deploy Azure Monitor at scale using Azure Policy](deploy-scale.md)
+- [Azure Monitor docs: What's new for May, 2021](whats-new.md)
## Insights
-**Updated articles**
+### Updated articles
-- [Monitoring your key vault service with Key Vault insights](insights/key-vault-insights-overview.md)-- [Monitoring your storage service with Azure Monitor Storage insights](insights/storage-insights-overview.md)
+- [Enable SQL insights (preview)](insights/sql-insights-enable.md)
## Logs
-**New articles**
+### Updated articles
+
+- [Log Analytics tutorial](logs/log-analytics-tutorial.md)
+- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
+- [Use Azure Private Link to securely connect networks to Azure Monitor](logs/private-link-security.md)
+- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)
+- [Monitor health of Log Analytics workspace in Azure Monitor](logs/monitor-workspace.md)
+
+## Virtual Machines
+
+### New articles
-- [Log Analytics Workspace Insights (preview)](logs/log-analytics-workspace-insights-overview.md)-- [Using queries in Azure Monitor Log Analytics](logs/queries.md)-- [Query packs in Azure Monitor Logs (preview)](logs/query-packs.md)-- [Save a query in Azure Monitor Log Analytics (preview)](logs/save-query.md)
+- [Monitoring virtual machines with Azure Monitor - Alerts](vm/monitor-virtual-machine-alerts.md)
+- [Monitoring virtual machines with Azure Monitor - Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)
+- [Monitor virtual machines with Azure Monitor - Configure monitoring](vm/monitor-virtual-machine-configure.md)
+- [Monitor virtual machines with Azure Monitor - Security monitoring](vm/monitor-virtual-machine-security.md)
+- [Monitoring virtual machines with Azure Monitor - Workloads](vm/monitor-virtual-machine-workloads.md)
+- [Monitoring virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)
+- [VM insights Generally Available (GA) Frequently Asked Questions](vm/vminsights-ga-release-faq.yml)
-**Updated articles**
+### Updated articles
-- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)
+- [Troubleshoot VM insights guest health (preview)](vm/vminsights-health-troubleshoot.md)
+- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/learn-bicep.md
Title: Discover Bicep in Microsoft Learn
-description: Provides an overview of the units that are available in Microsoft Learn for Bicep.
+ Title: Discover Bicep on Microsoft Learn
+description: Provides an overview of the units that are available on Microsoft Learn for Bicep.
Last updated 06/28/2021
-# Bicep in Microsoft Learn
+# Bicep on Microsoft Learn
For step-by-step guidance on using Bicep to deploy your infrastructure to Azure, Microsoft Learn offers several learning modules.
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/linter.md
You can use several values for rule level:
| `Info` | Violations do not appear in command-line build output. | Offending code is underlined with a blue squiggle and appears in Problems tab. |
| `Off` | Suppressed completely. | Suppressed completely. |
-The current set of linter rules is minimal and taken from [arm-ttk test cases](../templates/test-cases.md). Both Visual Studio Code extension and Bicep CLI check for all available rules by default and all rules are set at warning level. Based on the level of a rule, you see errors or warnings or informational messages within the editor.
+The current set of linter rules is minimal and taken from [arm-ttk test cases](../templates/template-test-cases.md). Both Visual Studio Code extension and Bicep CLI check for all available rules by default and all rules are set at warning level. Based on the level of a rule, you see errors or warnings or informational messages within the editor.
- [no-hardcoded-env-urls](https://github.com/Azure/bicep/blob/main/docs/linter-rules/no-hardcoded-env-urls.md)
- [no-unused-params](https://github.com/Azure/bicep/blob/main/docs/linter-rules/no-unused-params.md)
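To change the level of an individual rule, the linter reads settings from a `bicepconfig.json` file placed alongside your Bicep files. The following is a minimal sketch of such a file based on the configuration shape the Bicep linter uses; the exact set of supported properties can vary by Bicep version, so treat the layout below as an assumption rather than a definitive reference (`bicepconfig.json` accepts `//` comments).

```json
{
  // Assumed bicepconfig.json layout: linter settings live under "analyzers" > "core".
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        // Keep this rule as a warning in build output and in the editor.
        "no-hardcoded-env-urls": {
          "level": "warning"
        },
        // Suppress this rule completely.
        "no-unused-params": {
          "level": "off"
        }
      }
    }
  }
}
```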
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 07/06/2021 Last updated : 07/12/2021 # Azure subscription and service limits, quotas, and constraints
azure-resource-manager Createuidefinition Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/createuidefinition-test-cases.md
This article describes the tests that are run with the [template test toolkit](test-toolkit.md) for [createUiDefinition.json](../managed-applications/create-uidefinition-overview.md) files. The examples include the test names and code samples that **pass** or **fail** the tests.
-The toolkit includes [test cases](test-cases.md) for Azure Resource Manager templates (ARM templates) and the main template files named _azuredeploy.json_ or _maintemplate.json_. When the directory contains a _createUiDefinition.json_ file, specific tests are run for UI controls. For more information about how to run tests, see [Test parameters](test-toolkit.md#test-parameters).
+The toolkit includes [test cases](template-test-cases.md) for Azure Resource Manager templates (ARM templates) and the main template files named _azuredeploy.json_ or _maintemplate.json_. When the directory contains a _createUiDefinition.json_ file, specific tests are run for UI controls. For more information about how to run tests, see [Test parameters](test-toolkit.md#test-parameters).
The _createUiDefinition.json_ file creates custom user-interface (UI) controls using [elements](../managed-applications/create-uidefinition-elements.md) and [functions](../managed-applications/create-uidefinition-functions.md).
The _createUiDefinition.json_ file for this example:
- To create an Azure portal user interface, see [CreateUiDefinition.json for Azure managed application's create experience](../managed-applications/create-uidefinition-overview.md). - To use the Create UI Definition Sandbox, see [Test your portal interface for Azure Managed Applications](../managed-applications/test-createuidefinition.md). - For more information about UI controls, see [CreateUiDefinition elements](../managed-applications/create-uidefinition-elements.md) and [CreateUiDefinition functions](../managed-applications/create-uidefinition-functions.md).-- To learn more about ARM template tests, see [Default test cases for ARM template test toolkit](test-cases.md).
+- To learn more about ARM template tests, see [Default test cases for ARM template test toolkit](template-test-cases.md).
azure-resource-manager Template Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-test-cases.md
+
+ Title: Template test cases for test toolkit
+description: Describes the template tests that are run by the Azure Resource Manager template test toolkit.
+ Last updated : 07/12/2021++++
+# Default test cases for ARM template test toolkit
+
+This article describes the default tests that are run with the [template test toolkit](test-toolkit.md) for Azure Resource Manager templates (ARM templates). It provides examples that pass or fail the test and includes the name of each test. To run a specific test, see [Test parameters](test-toolkit.md#test-parameters).
+
+## Use correct schema
+
+Test name: **DeploymentTemplate Schema Is Correct**
+
+In your template, you must specify a valid schema value.
+
+This example **fails** because the schema is invalid:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-01-01/deploymentTemplate.json#",
+}
+```
+
+This example displays a **warning** because schema version `2015-01-01` is deprecated and isn't maintained.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+}
+```
+
+The following example **passes** using a valid schema.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+}
+```
+
+The template's `schema` property must be set to one of the following schemas:
+
+* `https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#`
+* `https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#`
+* `https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#`
+* `https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#`
+* `https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json`
+
+## Declared parameters must be used
+
+Test name: **Parameters Must Be Referenced**
+
+This test finds parameters that aren't used in the template or parameters that aren't used in a valid expression.
+
+To reduce confusion in your template, delete any parameters that are defined but not used. Eliminating unused parameters simplifies template deployments because you don't have to provide unnecessary values.
+
+This example **fails** because the expression that references a parameter is missing the leading square bracket (`[`).
+
+```json
+"resources": [
+ {
+ "location": " parameters('location')]"
+ }
+]
+```
+
+This example **passes** because the expression is valid:
+
+```json
+"resources": [
+ {
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+## Secure parameters can't have hard-coded default
+
+Test name: **Secure String Parameters Cannot Have Default**
+
+Don't provide a hard-coded default value for a secure parameter in your template. A secure parameter can have an empty string as a default value or use the [newGuid](template-functions-string.md#newguid) function in an expression.
+
+You use the types `secureString` or `secureObject` on parameters that contain sensitive values, like passwords. When a parameter uses a secure type, the value of the parameter isn't logged or stored in the deployment history. This action prevents a malicious user from discovering the sensitive value.
+
+When you provide a default value, that value is discoverable by anyone who can access the template or the deployment history.
+
+The following example **fails** this test:
+
+```json
+"parameters": {
+ "adminPassword": {
+ "defaultValue": "HardcodedPassword",
+ "type": "secureString"
+ }
+}
+```
+
+The next example **passes** this test:
+
+```json
+"parameters": {
+ "adminPassword": {
+ "type": "secureString"
+ }
+}
+```
+
+This example **passes** because the `newGuid` function is used:
+
+```json
+"parameters": {
+ "secureParameter": {
+ "type": "secureString",
+ "defaultValue": "[newGuid()]"
+ }
+}
+```
+
+## Environment URLs can't be hard-coded
+
+Test name: **DeploymentTemplate Must Not Contain Hardcoded Uri**
+
+Don't hard-code environment URLs in your template. Instead, use the [environment](template-functions-deployment.md#environment) function to dynamically get these URLs during deployment. For a list of the URL hosts that are blocked, see the [test case](https://github.com/Azure/arm-ttk/blob/master/arm-ttk/testcases/deploymentTemplate/DeploymentTemplate-Must-Not-Contain-Hardcoded-Uri.test.ps1).
+
+The following example **fails** this test because the URL is hard-coded.
+
+```json
+"variables":{
+ "AzureURL":"https://management.azure.com"
+}
+```
+
+The test also **fails** when used with [concat](template-functions-string.md#concat) or [uri](template-functions-string.md#uri).
+
+```json
+"variables":{
+ "AzureSchemaURL1": "[concat('https://','gallery.azure.com')]",
+ "AzureSchemaURL2": "[uri('gallery.azure.com','test')]"
+}
+```
+
+The following example **passes** this test.
+
+```json
+"variables": {
+ "AzureSchemaURL": "[environment().gallery]"
+}
+```
+
+## Location uses parameter
+
+Test name: **Location Should Not Be Hardcoded**
+
+To set a resource's location, your templates should have a parameter named `location` with the type set to `string`. In the main template, _azuredeploy.json_ or _mainTemplate.json_, this parameter can default to the resource group location. In linked or nested templates, the location parameter shouldn't have a default location.
+
+Template users may have limited access to regions where they can create resources. A hard-coded resource location might block users from creating a resource. The `"[resourceGroup().location]"` expression could block users if the resource group was created in a region the user can't access. Users who are blocked are unable to use the template.
+
+By providing a `location` parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
+
+The following example **fails** because the resource's `location` is set to `resourceGroup().location`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "[resourceGroup().location]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ }
+ }
+ ]
+}
+```
+
+The next example uses a `location` parameter but **fails** because the parameter defaults to a hard-coded location.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "westus"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+
+The following example **passes** when the template is used as the main template. Create a parameter that defaults to the resource group location but allows users to provide a different value.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for the resources."
+ }
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+
+> [!NOTE]
+> If the preceding example is used as a linked template, the test **fails**. When used as a linked template, remove the default value.
+
+## Resources should have location
+
+Test name: **Resources Should Have Location**
+
+The location for a resource should be set to a [template expression](template-expressions.md) or `global`. The template expression would typically use the `location` parameter described in [Location uses parameter](#location-uses-parameter).
+
+The following example **fails** this test because the `location` isn't an expression or `global`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "functions": [],
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "westus",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+
+The following example **passes** because the resource `location` is set to `global`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "functions": [],
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "global",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ }
+ }
+ ],
+ "outputs": {}
+}
+
+```
+
+The next example also **passes** because the `location` parameter uses an expression. The resource `location` uses the expression's value.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for the resources."
+ }
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+
+## VM size uses parameter
+
+Test name: **VM Size Should Be A Parameter**
+
+Don't hard-code the `hardwareProfile` object's `vmSize`. The test fails when the `hardwareProfile` is omitted or contains a hard-coded value. Provide a parameter so users of your template can modify the size of the deployed virtual machine. For more information, see [Microsoft.Compute virtualMachines](/azure/templates/microsoft.compute/virtualmachines).
+
+The following example **fails** because the `hardwareProfile` object's `vmSize` is a hard-coded value.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-12-01",
+ "name": "demoVM",
+ "location": "[parameters('location')]",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D2_v3"
+ }
+ }
+ }
+]
+```
+
+The example **passes** when a parameter specifies a value for `vmSize`:
+
+```json
+"parameters": {
+ "vmSizeParameter": {
+ "type": "string",
+ "defaultValue": "Standard_D2_v3",
+ "metadata": {
+ "description": "Size for the virtual machine."
+ }
+ }
+}
+```
+
+Then, `hardwareProfile` uses an expression for `vmSize` to reference the parameter's value:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-12-01",
+ "name": "demoVM",
+ "location": "[parameters('location')]",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "[parameters('vmSizeParameter')]"
+ }
+ }
+ }
+]
+```
+
+## Min and max values are numbers
+
+Test name: **Min And Max Value Are Numbers**
+
+When you define a parameter with `minValue` and `maxValue`, specify them as numbers. You must use `minValue` and `maxValue` as a pair or the test fails.
+
+The following example **fails** because `minValue` and `maxValue` are strings:
+
+```json
+"exampleParameter": {
+ "type": "int",
+ "minValue": "0",
+ "maxValue": "10"
+}
+```
+
+The following example **fails** because only `minValue` is used:
+
+```json
+"exampleParameter": {
+ "type": "int",
+ "minValue": 0
+}
+```
+
+The following example **passes** because `minValue` and `maxValue` are numbers:
+
+```json
+"exampleParameter": {
+ "type": "int",
+ "minValue": 0,
+ "maxValue": 10
+}
+```
+
+## Artifacts parameter defined correctly
+
+Test name: **artifacts parameter**
+
+When you include parameters for `_artifactsLocation` and `_artifactsLocationSasToken`, use the correct defaults and types. The following conditions must be met to pass this test; a sketch that satisfies them is shown after the list:
+
+* If you provide one parameter, you must provide the other.
+* `_artifactsLocation` must be a `string`.
+* `_artifactsLocation` must have a default value in the main template.
+* `_artifactsLocation` can't have a default value in a nested template.
+* `_artifactsLocation` must have either `"[deployment().properties.templateLink.uri]"` or the raw repo URL for its default value.
+* `_artifactsLocationSasToken` must be a `secureString`.
+* `_artifactsLocationSasToken` can only have an empty string for its default value.
+* `_artifactsLocationSasToken` can't have a default value in a nested template.
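The following is a minimal sketch of a main template's `parameters` section that satisfies the conditions above. The parameter names, types, and default values come directly from the conditions listed; the metadata descriptions are illustrative only.

```json
"parameters": {
  "_artifactsLocation": {
    "type": "string",
    "defaultValue": "[deployment().properties.templateLink.uri]",
    "metadata": {
      "description": "Base URI where the artifacts required by this template are located."
    }
  },
  "_artifactsLocationSasToken": {
    "type": "secureString",
    "defaultValue": "",
    "metadata": {
      "description": "SAS token required to access _artifactsLocation."
    }
  }
}
```

In a nested template, declare the same two parameters without default values and pass the values in from the parent template.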
+
+## Declared variables must be used
+
+Test name: **Variables Must Be Referenced**
+
+This test finds variables that aren't used in the template or aren't used in a valid expression. To reduce confusion in your template, delete any variables that are defined but not used.
+
+This example **fails** because the expression that references a variable is missing the leading square bracket (`[`).
+
+```json
+"outputs": {
+ "outputVariable": {
+ "type": "string",
+ "value": " variables('varExample')]"
+ }
+}
+```
+
+This example **passes** because the expression is valid:
+
+```json
+"outputs": {
+ "outputVariable": {
+ "type": "string",
+ "value": "[variables('varExample')]"
+ }
+}
+```
+
+## Dynamic variable should not use concat
+
+Test name: **Dynamic Variable References Should Not Use Concat**
+
+Sometimes you need to dynamically construct a variable based on the value of another variable or parameter. Don't use the [concat](template-functions-string.md#concat) function when setting the value. Instead, use an object that includes the available options and dynamically get one of the properties from the object during deployment.
+
+The following example **passes** this test. The `currentImage` variable is dynamically set during deployment.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "osType": {
+ "type": "string",
+ "allowedValues": [
+ "Windows",
+ "Linux"
+ ]
+ }
+ },
+ "variables": {
+ "imageOS": {
+ "Windows": {
+ "image": "Windows Image"
+ },
+ "Linux": {
+ "image": "Linux Image"
+ }
+ },
+ "currentImage": "[variables('imageOS')[parameters('osType')].image]"
+ },
+ "resources": [],
+ "outputs": {
+ "result": {
+ "type": "string",
+ "value": "[variables('currentImage')]"
+ }
+ }
+}
+```
+
+## Use recent API version
+
+Test name: **apiVersions Should Be Recent**
+
+The API version for each resource should use a recent version that's hard-coded as a string. The test evaluates the version you use against the versions available for that resource type. An API version that's less than two years old from the date the test was run is considered recent. Don't use a preview version when a more recent version is available.
+
+The following example **fails** because the API version is more than two years old:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+The following example **fails** because a preview version is used when a newer version is available:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2020-08-01-preview",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+The following example **passes** because it's a recent version that's not a preview version:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+## Use hard-coded API version
+
+Test name: **Providers apiVersions Is Not Permitted**
+
+The API version for a resource type determines which properties are available. Provide a hard-coded API version in your template. Don't retrieve an API version that's determined during deployment because you won't know which properties are available.
+
+The following example **fails** this test.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "[providers('Microsoft.Compute', 'virtualMachines').apiVersions[0]]",
+ ...
+ }
+]
+```
+
+The following example **passes** this test.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-12-01",
+ ...
+ }
+]
+```
+
+## Properties can't be empty
+
+Test name: **Template Should Not Contain Blanks**
+
+Don't hard-code properties to an empty value. Empty values include null and empty strings, objects, or arrays. If a property is set to an empty value, remove that property from your template. You can set a property to an empty value during deployment, such as through a parameter.
+
+The following example **fails** because there are empty properties:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-01-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "sku": {},
+ "kind": ""
+ }
+]
+```
+
+The following example **passes**:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-01-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS",
+ "tier": "Standard"
+ },
+ "kind": "Storage"
+ }
+]
+```
+
+## Use Resource ID functions
+
+Test name: **IDs Should Be Derived From ResourceIDs**
+
+When specifying a resource ID, use one of the resource ID functions. The allowed functions are:
+
+* [resourceId](template-functions-resource.md#resourceid)
+* [subscriptionResourceId](template-functions-resource.md#subscriptionresourceid)
+* [tenantResourceId](template-functions-resource.md#tenantresourceid)
+* [extensionResourceId](template-functions-resource.md#extensionresourceid)
+
+Don't use the concat function to create a resource ID. The following example **fails** this test.
+
+```json
+"networkSecurityGroup": {
+ "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/networkSecurityGroups/', variables('networkSecurityGroupName'))]"
+}
+```
+
+The next example **passes** this test.
+
+```json
+"networkSecurityGroup": {
+ "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
+}
+```
+
+## ResourceId function has correct parameters
+
+Test name: **ResourceIds should not contain**
+
+When generating resource IDs, don't use unnecessary functions for optional parameters. By default, the [resourceId](template-functions-resource.md#resourceid) function uses the current subscription and resource group. You don't need to provide those values.
+
+The following example **fails** this test, because you don't need to provide the current subscription ID and resource group name.
+
+```json
+"networkSecurityGroup": {
+ "id": "[resourceId(subscription().subscriptionId, resourceGroup().name, 'Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
+}
+```
+
+The next example **passes** this test.
+
+```json
+"networkSecurityGroup": {
+ "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
+}
+```
+
+This test applies to:
+
+* [resourceId](template-functions-resource.md#resourceid)
+* [subscriptionResourceId](template-functions-resource.md#subscriptionresourceid)
+* [tenantResourceId](template-functions-resource.md#tenantresourceid)
+* [extensionResourceId](template-functions-resource.md#extensionresourceid)
+* [reference](template-functions-resource.md#reference)
+* [list*](template-functions-resource.md#list)
+
+For `reference` and `list*`, the test **fails** when you use `concat` to construct the resource ID.
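+
+As an illustration, a `reference` call of roughly this shape, where the resource ID is assembled with `concat`, is the kind of pattern the test flags (the names here are illustrative):
+
+```json
+"outputs": {
+  "storageProperties": {
+    "type": "object",
+    "value": "[reference(concat(resourceGroup().id, '/providers/Microsoft.Storage/storageAccounts/', parameters('storageName')), '2021-02-01')]"
+  }
+}
+```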
+
+## dependsOn best practices
+
+Test name: **DependsOn Best Practices**
+
+When setting the deployment dependencies, don't use the [if](template-functions-logical.md#if) function to test a condition. If one resource depends on a resource that's [conditionally deployed](conditional-resource-deployment.md), set the dependency as you would with any resource. When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
+
+The `dependsOn` element can't begin with a [concat](template-functions-array.md#concat) function.
+
+The following example **fails** because it contains an `if` function:
+
+```json
+"dependsOn": [
+ "[if(equals(parameters('newOrExisting'),'new'), variables('storageAccountName'), '')]"
+]
+```
+
+This example **fails** because it begins with `concat`:
+
+```json
+"dependsOn": [
+ "[concat(variables('storageAccountName'))]"
+]
+```
+
+The following example **passes**:
+
+```json
+"dependsOn": [
+ "[variables('storageAccountName')]"
+]
+```
+
+## Nested or linked deployments can't use debug
+
+Test name: **Deployment Resources Must Not Be Debug**
+
+When you define a [nested or linked template](linked-templates.md) with the `Microsoft.Resources/deployments` resource type, you can enable [debugging](/azure/templates/microsoft.resources/deployments#debugsetting-object). Debugging is used when you need to test a template but can expose sensitive information. Before the template is used in production, turn off debugging. You can remove the `debugSetting` object or change the `detailLevel` property to `none`.
+
+The following example **fails** this test:
+
+```json
+"debugSetting": {
+ "detailLevel": "requestContent"
+}
+```
+
+The following example **passes** this test:
+
+```json
+"debugSetting": {
+ "detailLevel": "none"
+}
+```
+
+## Admin user names can't be literal value
+
+Test name: **adminUsername Should Not Be A Literal**
+
+When setting an `adminUsername`, don't use a literal value. Create a parameter for the user name and use an expression to reference the parameter's value.
+
+The following example **fails** with a literal value:
+
+```json
+"osProfile": {
+ "adminUserName": "myAdmin"
+}
+```
+
+The following example **passes** with an expression:
+
+```json
+"osProfile": {
+ "adminUsername": "[parameters('adminUsername')]"
+}
+```
+
+## Use latest VM image
+
+Test name: **VM Images Should Use Latest Version**
+
+This test is disabled, but the output shows that it passed. As a best practice, check your template for the following criterion:
+
+If your template includes a virtual machine with an image, make sure it's using the latest version of the image.
+
+## Use stable VM images
+
+Test name: **Virtual Machines Should Not Be Preview**
+
+Virtual machines shouldn't use preview images.
+
+The following example **fails** this test.
+
+```json
+"imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "16.04-LTS",
+ "version": "latest-preview"
+}
+```
+
+The following example **passes** this test.
+
+```json
+"imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "16.04-LTS",
+ "version": "latest"
+}
+```
+
+## Don't use ManagedIdentity extension
+
+Test name: **ManagedIdentityExtension must not be used**
+
+Don't apply the `ManagedIdentity` extension to a virtual machine. The extension was deprecated in 2019 and should no longer be used.
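+
+If a virtual machine needs a managed identity, the typical alternative is to enable it through the VM's `identity` property rather than through the deprecated extension. A minimal sketch, with an illustrative resource name and API version:
+
+```json
+"resources": [
+  {
+    "type": "Microsoft.Compute/virtualMachines",
+    "apiVersion": "2020-12-01",
+    "name": "examplevm",
+    "location": "[parameters('location')]",
+    "identity": {
+      "type": "SystemAssigned"
+    },
+    ...
+  }
+]
+```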
+
+## Outputs can't include secrets
+
+Test name: **Outputs Must Not Contain Secrets**
+
+Don't include any values in the `outputs` section that could potentially expose secrets. For example, don't output secure parameters of type `secureString` or `secureObject`, or the results of [list*](template-functions-resource.md#list) functions such as `listKeys`.
+
+The output from a template is stored in the deployment history, so a malicious user could find that information.
+
+The following example **fails** the test because it includes a secure parameter in an output value.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "secureParam": {
+ "type": "secureString"
+ }
+ },
+ "functions": [],
+ "variables": {},
+ "resources": [],
+ "outputs": {
+ "badResult": {
+ "type": "string",
+ "value": "[concat('this is the value ', parameters('secureParam'))]"
+ }
+ }
+}
+```
+
+The following example **fails** because it uses a [list*](template-functions-resource.md#list) function in the outputs.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageName": {
+ "type": "string"
+ }
+ },
+ "functions": [],
+ "variables": {},
+ "resources": [],
+ "outputs": {
+ "badResult": {
+ "type": "object",
+ "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2021-02-01')]"
+ }
+ }
+}
+```
+
+## Use protectedSettings for commandToExecute secrets
+
+Test name: **CommandToExecute Must Use ProtectedSettings For Secrets**
+
+For resources with type `CustomScript`, use the encrypted `protectedSettings` when `commandToExecute` includes secret data such as a password. Secret data can come from secure parameters of type `secureString` or `secureObject`, from [list*](template-functions-resource.md#list) functions such as `listKeys`, or from custom scripts.
+
+Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Windows](/azure/virtual-machines/extensions/custom-script-windows), or [Linux](../../virtual-machines/extensions/custom-script-linux.md).
+
+This example **fails** because `settings` uses `commandToExecute` with a secure parameter:
+
+```json
+"parameters": {
+ "adminPassword": {
+ "type": "secureString"
+ }
+}
+...
+"properties": {
+ "type": "CustomScript",
+ "settings": {
+ "commandToExecute": "[parameters('adminPassword')]"
+ }
+}
+```
+
+This example **fails** because `settings` uses `commandToExecute` with a `listKeys` function:
+
+```json
+"properties": {
+ "type": "CustomScript",
+ "settings": {
+ "commandToExecute": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2021-02-01')]"
+ }
+}
+```
+
+This example **passes** because `protectedSettings` uses `commandToExecute` with a secure parameter:
+
+```json
+"parameters": {
+ "adminPassword": {
+ "type": "secureString"
+ }
+}
+...
+"properties": {
+ "type": "CustomScript",
+ "protectedSettings": {
+ "commandToExecute": "[parameters('adminPassword')]"
+ }
+}
+```
+
+This example **passes** because `protectedSettings` uses `commandToExecute` with a `listKeys` function:
+
+```json
+"properties": {
+ "type": "CustomScript",
+ "protectedSettings": {
+ "commandToExecute": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2021-02-01')]"
+ }
+}
+```
+
+## Use recent API versions in reference functions
+
+Test name: **apiVersions Should Be Recent In Reference Functions**
+
+This test ensures that the API versions used in [reference](template-functions-resource.md#reference) functions are recent and aren't preview versions. The test evaluates the API version you use against the resource provider's available versions. An API version that's less than two years old from the date the test was run is considered recent.
+
+This example **fails** because the API version is more than two years old:
+
+```json
+"outputs": {
+ "stgAcct": {
+ "type": "string",
+ "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01')]"
+ }
+}
+```
+
+This example **fails** because the API version is a preview version:
+
+```json
+"outputs": {
+ "stgAcct": {
+ "type": "string",
+ "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2020-08-01-preview')]"
+ }
+}
+```
+
+This example **passes** because the API version is less than two years old and isn't a preview version:
+
+```json
+"outputs": {
+ "stgAcct": {
+ "type": "string",
+ "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-02-01')]"
+ }
+}
+```
+
+## Use type and name in resourceId functions
+
+Test name: **Resources Should Not Be Ambiguous**
+
+This test is disabled, but the output shows that it passed. The best practice is to check your template for the following criteria:
+
+A [resourceId](template-functions-resource.md#resourceid) must include a resource type and resource name. This test finds all the template's `resourceId` functions and verifies that the resource is used in the template with the correct syntax. Otherwise the function is considered ambiguous.
+
+For example, a `resourceId` function is considered ambiguous:
+
+* When a resource isn't found in the template and a resource group isn't specified.
+* If a resource includes a condition and a resource group isn't specified.
+* If a related resource contains some but not all of the name segments. For example, a child resource contains more than one name segment. For more information, see [resourceId remarks](template-functions-resource.md#remarks-3).
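+
+For example, when the resource isn't defined in the template, one way to keep the reference unambiguous is to name the resource group explicitly in the `resourceId` call. A sketch with illustrative parameter names:
+
+```json
+"networkSecurityGroup": {
+  "id": "[resourceId(parameters('existingResourceGroup'), 'Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroupName'))]"
+}
+```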
+
+## Use inner scope for nested deployment secure parameters
+
+Test name: **Secure Params In Nested Deployments**
+
+Use the nested template's `expressionEvaluationOptions` object with `inner` scope to evaluate expressions that contain secure parameters of type `secureString` or `secureObject` or [list*](template-functions-resource.md#list) functions such as `listKeys`. If the `outer` scope is used, expressions are evaluated in clear text within the parent template's scope. The secure value is then visible to anyone with access to the deployment history. The default value of `expressionEvaluationOptions` is `outer`.
+
+For more information about nested templates, see [Microsoft.Resources/deployments](/azure/templates/microsoft.resources/deployments) and [Expression evaluation scope in nested templates](linked-templates.md#expression-evaluation-scope-in-nested-templates).
+
+This example **fails** because `expressionEvaluationOptions` uses `outer` scope to evaluate secure parameters or `list*` functions:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "nestedTemplate",
+ "properties": {
+ "expressionEvaluationOptions": {
+ "scope": "outer"
+ }
+ }
+ }
+]
+```
+
+This example **passes** because `expressionEvaluationOptions` uses `inner` scope to evaluate secure parameters or `list*` functions:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "nestedTemplate",
+ "properties": {
+ "expressionEvaluationOptions": {
+ "scope": "inner"
+ }
+ }
+ }
+]
+```
+
+## Next steps
+
+* To learn about running the test toolkit, see [Use ARM template test toolkit](test-toolkit.md).
+* For a Microsoft Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
azure-resource-manager Template Tutorial Use Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-use-conditions.md
To complete this article, you need:
## Open a Quickstart template
-Azure Quickstart Templates is a repository for ARM templates. Instead of creating a template from scratch, you can find a sample template and customize it. The template used in this tutorial is called [Deploy a simple Windows VM](https://azure.microsoft.com/resources/templates/101-vm-simple-windows/).
+Azure Quickstart Templates is a repository for ARM templates. Instead of creating a template from scratch, you can find a sample template and customize it. The template used in this tutorial is called [Deploy a simple Windows VM](https://azure.microsoft.com/resources/templates/vm-simple-windows/).
1. From Visual Studio Code, select **File** > **Open File**. 1. In **File name**, paste the following URL:
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/test-toolkit.md
The [Azure Resource Manager template (ARM template) test toolkit](https://aka.ms/arm-ttk) checks whether your template uses recommended practices. When your template isn't compliant with recommended practices, it returns a list of warnings with the suggested changes. By using the test toolkit, you can learn how to avoid common problems in template development.
-The test toolkit provides a [set of default tests](test-cases.md). These tests are recommendations but not requirements. You can decide which tests are relevant to your goals and customize which tests are run.
+The test toolkit provides a [set of default tests](template-test-cases.md). These tests are recommendations but not requirements. You can decide which tests are relevant to your goals and customize which tests are run.
-This article describes how to run the test toolkit and how to add or remove tests. For descriptions of the default tests, see [toolkit test cases](test-cases.md).
+This article describes how to run the test toolkit and how to add or remove tests. For descriptions of the default tests, see [toolkit test cases](template-test-cases.md).
The toolkit is a set of PowerShell scripts that can be run from a command in PowerShell or CLI.
To test one file in that folder, add the `-File` parameter. However, the folder
Test-AzTemplate -TemplatePath $TemplateFolder -File cdn.json ```
-By default, all tests are run. To specify individual tests to run, use the `-Test` parameter. Provide the name of the test. For the names, see [Test cases for toolkit](test-cases.md).
+By default, all tests are run. To specify individual tests to run, use the `-Test` parameter. Provide the name of the test. For the names, see [Test cases for toolkit](template-test-cases.md).
```powershell Test-AzTemplate -TemplatePath $TemplateFolder -Test "Resources Should Have Location"
The next example shows how to run the tests.
## Next steps
-* To learn about the default tests, see [Default test cases for ARM template test toolkit](test-cases.md).
+* To learn about the default tests, see [Default test cases for ARM template test toolkit](template-test-cases.md).
* For a Microsoft Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
azure-video-analyzer Develop Deploy Grpc Inference Srv https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/develop-deploy-grpc-inference-srv.md
Perform the necessary steps to have Video Analyzer module deployed and working o
1. Choose one of the many languages that are supported by gRPC: C#, C++, Dart, Go, Java, Node, Objective-C, PHP, Python, Ruby. 1. Implement a gRPC server that will communicate with Video Analyzer using [the proto3 files](https://github.com/Azure/video-analyzer/tree/main/contracts/grpc).
- :::image type="content" source="./media/develop-deploy-grpc-inference-srv/inference-srv-container-process.png" alt-text="gRPC server that will communicate with Video Analyzer using the proto3 files":::
+ :::image type="content" source="./media/develop-deploy-grpc-inference-srv/inference-srv-container-process.svg" alt-text="gRPC server that will communicate with Video Analyzer using the proto3 files":::
Within this service: 1. Handle session description message exchange between the server and the client.
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
+
+ Title: Attach disk pools to Azure VMware Solution hosts (Preview)
+description: Learn how to attach a disk pool surfaced through an iSCSI target as the VMware datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and attach them to your VMware instance.
+ Last updated : 07/13/2021+
+#Customer intent: As an Azure service administrator, I want to scale my AVS hosts using disk pools instead of scaling clusters. So that I can use block storage for active working sets and tier less frequently accessed data from vSAN to disks. I can also replicate data from on-premises or primary VMware environment to disk storage for the secondary site.
+++
+# Attach disk pools to Azure VMware Solution hosts (Preview)
+
+[Azure disk pools](../virtual-machines/disks-pools.md) offer persistent block storage to applications and workloads backed by Azure Disks. You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance. For example, you can scale up by using disk pools instead of scaling clusters if you host storage-intensive workloads. You can also use disks to replicate data from on-premises or primary VMware environments to disk storage for the secondary site. To scale storage independent of the Azure VMware Solution hosts, we support surfacing [ultra disks](../virtual-machines/disks-types.md#ultra-disk) and [premium SSD](../virtual-machines/disks-types.md#premium-ssd) as the datastores.
+
+>[!IMPORTANT]
+>Azure disk pools on Azure VMware Solution (Preview) is currently in public preview.
+>This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+>For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure managed disks are attached to one iSCSI controller virtual machine deployed under the Azure VMware Solution resource group. Disks get deployed as storage targets to a disk pool, and each storage target shows as an iSCSI LUN under the iSCSI target. You can expose a disk pool as an iSCSI target connected to Azure VMware Solution hosts as a datastore. A disk pool surfaces as a single endpoint for all underlying disks added as storage targets. Each disk pool can have only one iSCSI controller.
+
+The diagram shows how disk pools work with Azure VMware Solution hosts. Each iSCSI controller accesses managed disks using a standard Azure protocol, and the Azure VMware Solution hosts can access the iSCSI controller over iSCSI.
++++
+## Supported regions
+
+You can only connect the disk pool to an Azure VMware Solution private cloud in the same region. For a list of supported regions, see [Regional availability](/azure/virtual-machines/disks-pools#regional-availability). If your private cloud is deployed in an unsupported region, you can redeploy it in a supported region. Colocating the Azure VMware Solution private cloud and the disk pool provides the best performance with minimal network latency.
++
+## Prerequisites
+
+- Scalability and performance requirements of your workloads are identified. For details, see [Planning for Azure disk pools](../virtual-machines/disks-pools-planning.md).
+
+- [Azure VMware Solution private cloud](deploy-azure-vmware-solution.md) deployed with a [virtual network configured](deploy-azure-vmware-solution.md#step-3-connect-to-azure-virtual-network-with-expressroute). For more information, see [Network planning checklist](tutorial-network-checklist.md) and [Configure networking for your VMware private cloud](tutorial-configure-networking.md).
+
+ - If you select ultra disks, use Ultra Performance for the Azure VMware Solution private cloud and then [enable ExpressRoute FastPath](/azure/expressroute/expressroute-howto-linkvnet-arm#configure-expressroute-fastpath).
+
+ - If you select premium SSDs, use Standard (1 Gbps) for the Azure VMware Solution private cloud.
+
+- Disk pool as the backing storage deployed and exposed as an iSCSI target with each disk as an individual LUN. For details, see [Deploy an Azure disk pool](../virtual-machines/disks-pools-deploy.md).
+
+ >[!IMPORTANT]
+ > The disk pool must be deployed in the same subscription as the VMware cluster, and it must be attached to the same VNET as the VMware cluster.
+
+## Attach a disk pool to your private cloud
+You'll attach a disk pool, surfaced through an iSCSI target, as the VMware datastore of an Azure VMware Solution private cloud.
+
+>[!IMPORTANT]
+>While in **Public Preview**, only attach a disk pool to a test or non-production cluster.
+
+1. Check if the subscription is registered to `Microsoft.AVS`:
+
+ ```azurecli
+ az provider show -n "Microsoft.AVS" --query registrationState
+ ```
+
+ If it's not already registered, then register it:
+
+ ```azurecli
+ az provider register -n "Microsoft.AVS"
+ ```
+
+1. Check if the subscription is registered to `CloudSanExperience` AFEC in Microsoft.AVS:
+
+ ```azurecli
+ az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"
+ ```
+
+ - If it's not already registered, then register it:
+
+ ```azurecli
+ az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"
+ ```
+
+ The registration may take approximately 15 minutes to complete. You can check the current status with:
+
+ ```azurecli
+ az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state
+ ```
+
+ >[!TIP]
+ >If the registration is stuck in an intermediate state for longer than 15 minutes, unregister and then re-register the flag:
+ >
+ >```azurecli
+ >az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"
+ >az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"
+ >```
+
+1. Check if the `vmware` extension is installed:
+
+ ```azurecli
+ az extension show --name vmware
+ ```
+
+ - If the extension is already installed, check if the version is **3.0.0**. If an older version is installed, update the extension:
+
+ ```azurecli
+ az extension update --name vmware
+ ```
+
+ - If it's not already installed, install it:
+
+ ```azurecli
+ az extension add --name vmware
+ ```
+
+3. Create and attach an iSCSI datastore in the Azure VMware Solution private cloud cluster using the iSCSI target provided by `Microsoft.StoragePool`:
+
+ ```azurecli
+ az vmware datastore disk-pool-volume create --name iSCSIDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --target-id /subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/ResourceGroup1/providers/Microsoft.StoragePool/diskPools/mpio-diskpool/iscsiTargets/mpio-iscsi-target --lun-name lun0
+ ```
+
+ >[!TIP]
+ >You can display the help on the datastores:
+ >
+ > ```azurecli
+ > az vmware datastore -h
+ > ```
+
+
+4. Show the details of an iSCSI datastore in a private cloud cluster:
+
+ ```azurecli
+ az vmware datastore show --name MyCloudSANDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud
+ ```
+
+5. List all the datastores in a private cloud cluster:
+
+ ```azurecli
+ az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud
+ ```
+
+## Delete an iSCSI datastore from your private cloud
+
+When you delete a private cloud datastore, the disk pool resources don't get deleted. There's no maintenance window required for this operation.
+
+1. Power off the VMs and remove all objects associated with the iSCSI datastores. These objects include:
+
+ - VMs (remove from inventory)
+
+ - Templates
+
+ - Snapshots
+
+2. Delete the private cloud datastore:
+
+ ```azurecli
+ az vmware datastore delete --name MyCloudSANDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud
+ ```
+
+## Next steps
+
+Now that you've attached a disk pool to your Azure VMware Solution hosts, you may want to learn about:
+
+- [Managing an Azure disk pool](../virtual-machines/disks-pools-manage.md). Once you've deployed a disk pool, there are various management actions available to you. You can add disks to or remove disks from a disk pool, update iSCSI LUN mapping, or add ACLs.
+
+- [Deleting a disk pool](/azure/virtual-machines/disks-pools-deprovision#delete-a-disk-pool). When you delete a disk pool, all the resources in the managed resource group are also deleted.
+
+- [Disabling iSCSI support on a disk](/azure/virtual-machines/disks-pools-deprovision#disable-iscsi-support). If you disable iSCSI support on a disk pool, you effectively can no longer use a disk pool.
+
+- [Moving disk pools to a different subscription](../virtual-machines/disks-pools-move-resource.md). Move an Azure disk pool to a different subscription, which involves moving the disk pool itself, contained disks, managed resource group, and all the resources.
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
In the planning phase, you defined whether to use an *existing* or *new* Express
:::image type="content" source="media/connect-expressroute-vnet-workflow.png" alt-text="Diagram showing the workflow for connecting Azure Virtual Network to ExpressRoute in Azure VMware Solution." border="false":::
+>[!IMPORTANT]
+>[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
+ ### Use a new ExpressRoute virtual network gateway >[!IMPORTANT]
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
This network segment is used primarily for testing purposes during the initial d
## Define the virtual network gateway
->[!IMPORTANT]
->You can connect to a virtual network gateway in an Azure Virtual WAN, but it is out of scope for this quick start.
+An Azure VMware Solution private cloud requires an Azure Virtual Network and an ExpressRoute circuit.
-An Azure VMware Solution private cloud requires an Azure Virtual Network and an ExpressRoute circuit.
+>[!IMPORTANT]
+>[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)] You can connect to a virtual network gateway in an Azure Virtual WAN, but it is out of scope for this quick start.
Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after you create your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway, and for planning purposes, make note of which ExpressRoute virtual network gateway you'll use.
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
Last updated 04/23/2021
# Tutorial: Configure networking for your VMware private cloud in Azure
-An Azure VMware Solution private cloud requires an Azure Virtual Network. Because Azure VMware Solution doesn't support your on-premises vCenter, extra steps for integration with your on-premises environment are needed. Setting up an ExpressRoute circuit and a virtual network gateway is also required.
+An Azure VMware Solution private cloud requires an Azure Virtual Network. Because Azure VMware Solution doesn't support your on-premises vCenter, extra steps are needed for integration with your on-premises environment. Setting up an ExpressRoute circuit and a virtual network gateway is also required.
+ In this tutorial, you learn how to:
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
When deploying a private cloud, you receive IP addresses for vCenter and NSX-T M
The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and provide North-South connectivity to the internet and Azure services.
+>[!IMPORTANT]
+>[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
+ ## Routing and subnet considerations The Azure VMware Solution private cloud is connected to your Azure virtual network using an Azure ExpressRoute connection. This high bandwidth, low latency connection allows you to access services running in your Azure subscription from your private cloud environment. The routing is Border Gateway Protocol (BGP) based, automatically provisioned, and enabled by default for each private cloud deployment.
backup Restore Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-blobs-storage-account-cli.md
az dataprotection backup-instance restore initialize-for-item-recovery --datasou
Use the [az dataprotection backup-instance restore trigger](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_trigger) command to trigger the restore with the request prepared above. ```azurecli-interactive
-az dataprotection backup-instance restore trigger -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036 --parameters restore.json
+az dataprotection backup-instance restore trigger -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036 --restore-request-object restore.json
``` ## Tracking job
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-create-host-powershell.md
Previously updated : 10/14/2020 Last updated : 07/12/2021 # Customer intent: As someone with a networking background, I want to create an Azure Bastion host.
This article shows you how to create an Azure Bastion host using PowerShell. Once you provision the Azure Bastion service in your virtual network, the seamless RDP/SSH experience is available to all of the VMs in the same virtual network. Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine.
-Optionally, you can create an Azure Bastion host by using the [Azure portal](./tutorial-create-host-portal.md).
+Optionally, you can create an Azure Bastion host by using the following methods:
+* [Azure portal](./tutorial-create-host-portal.md)
+* [Azure CLI](create-host-cli.md)
+ ## Prerequisites
Verify that you have an Azure subscription. If you don't already have an Azure s
[!INCLUDE [PowerShell](../../includes/vpn-gateway-cloud-shell-powershell-about.md)]
- >[!NOTE]
- >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ > [!NOTE]
+ > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
> ## <a name="createhost"></a>Create a bastion host This section helps you create a new Azure Bastion resource using Azure PowerShell.
-1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. This is different than a Gateway subnet. You must use a subnet of at least /27 or larger subnet (/27, /26, and so on). Create the **AzureBastionSubnet** without any route tables or delegations. If you use Network Security Groups on the **AzureBastionSubnet**, refer to the [Work with NSGs](bastion-nsg.md) article.
+1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. This is different than a VPN gateway subnet.
+
+ [!INCLUDE [Note about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
```azurepowershell-interactive $subnetName = "AzureBastionSubnet"
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-faq.md
Previously updated : 06/22/2021 Last updated : 07/12/2021 # Azure Bastion FAQ
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-overview.md
Previously updated : 06/22/2021 Last updated : 07/12/2021
Azure Bastion is a service you deploy that lets you connect to a virtual machine
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH.
-## Architecture
-Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. Once you provision an Azure Bastion service in your virtual network, the RDP/SSH experience is available to all your VMs in the same virtual network.
+## <a name="key"></a>Key benefits
+
+* **RDP and SSH directly in Azure portal:** You can get to the RDP and SSH session directly in the Azure portal using a seamless single-click experience.
+* **Remote Session over TLS and firewall traversal for RDP/SSH:** Azure Bastion uses an HTML5-based web client that is automatically streamed to your local device. You get your RDP/SSH session over TLS on port 443, enabling you to traverse corporate firewalls securely.
+* **No Public IP required on the Azure VM:** Azure Bastion opens the RDP/SSH connection to your Azure virtual machine using the private IP on your VM. You don't need a public IP on your virtual machine.
+* **No hassle of managing NSGs:** Azure Bastion is a fully managed PaaS service from Azure that is hardened internally to provide you with secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines.
+* **Protection against port scanning:** Because you do not need to expose your virtual machines to the public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.
+* **Protect against zero-day exploits. Hardening in one place only:** Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don't need to worry about hardening each of the virtual machines in your virtual network. The Azure platform protects against zero-day exploits by keeping Azure Bastion hardened and always up to date for you.
+
+## <a name="sku"></a>SKUs
+
+Azure Bastion has two available SKUs, Basic and Standard. The Standard SKU is currently in Preview. For more information, including how to upgrade a SKU, see the [Configuration settings](configuration-settings.md#skus) article.
+
+The following table shows features and corresponding SKUs.
++
+## <a name="architecture"></a>Architecture
+
+Azure Bastion is deployed to a virtual network and supports virtual network peering. Specifically, Azure Bastion manages RDP/SSH connectivity to VMs created in the local or peered virtual networks.
RDP and SSH are some of the fundamental means through which you can connect to your workloads running in Azure. Exposing RDP/SSH ports over the Internet isn't desired and is seen as a significant threat surface. This is often due to protocol vulnerabilities. To contain this threat surface, you can deploy bastion hosts (also known as jump-servers) at the public side of your perimeter network. Bastion host servers are designed and configured to withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the bastion, as well as further inside the network.
-![Azure Bastion Architecture](./media/bastion-overview/architecture.png)
This figure shows the architecture of an Azure Bastion deployment. In this diagram:
This figure shows the architecture of an Azure Bastion deployment. In this diagr
* With a single click, the RDP/SSH session opens in the browser. * No public IP is required on the Azure VM.
-## Key features
+## <a name="host-scaling"></a>Host scaling
-The following features are available:
+Azure Bastion supports manual host scaling. You can configure the number of host instances (scale units) in order to manage the number of concurrent RDP/SSH connections that Azure Bastion can support. Increasing the number of host instances lets Azure Bastion manage more concurrent sessions. Decreasing the number of instances decreases the number of concurrent supported sessions. Azure Bastion supports up to 50 host instances. This feature is available for the Azure Bastion Standard SKU only.
-* **RDP and SSH directly in Azure portal:** You can directly get to the RDP and SSH session directly in the Azure portal using a single click seamless experience.
-* **Remote Session over TLS and firewall traversal for RDP/SSH:** Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device, so that you get your RDP/SSH session over TLS on port 443 enabling you to traverse corporate firewalls securely.
-* **No Public IP required on the Azure VM:** Azure Bastion opens the RDP/SSH connection to your Azure virtual machine using private IP on your VM. You don't need a public IP on your virtual machine.
-* **No hassle of managing NSGs:** Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity. You don't need to apply any NSGs on Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines.
-* **Protection against port scanning:** Because you do not need to expose your virtual machines to public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.
-* **Protect against zero-day exploits. Hardening in one place only:** Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don't need to worry about hardening each of the virtual machines in your virtual network. The Azure platform protects against zero-day exploits by keeping the Azure Bastion hardened and always up to date for you.
+For more information, see the [Configuration settings](configuration-settings.md#instance) article.
+
+## <a name="pricing"></a>Pricing
+
+Azure Bastion pricing involves a combination of hourly pricing based on SKU, scale units, and data transfer rates. Pricing information can be found on the [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion) page.
## <a name="new"></a>What's new? Subscribe to the RSS feed and view the latest Azure Bastion feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=Azure%20Bastion) page.
-## FAQ
+## Bastion FAQ
For frequently asked questions, see the Bastion [FAQ](bastion-faq.md).
bastion Bastion Vm Full Screen https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-vm-full-screen.md
Previously updated : 02/03/2020 Last updated : 07/12/2021 # Customer intent: I want to manage my VM experience using Azure Bastion.
-# Change to full screen view for a vm session: Azure Bastion
+# Change to full screen view for a VM session: Azure Bastion
This article helps you change the virtual machine view to full screen and back in your browser. Before you work with a VM, make sure you have followed the steps to [Create a Bastion host](./tutorial-create-host-portal.md). Then, connect to the VM that you want to work with using either [RDP](bastion-connect-vm-rdp.md) or [SSH](bastion-connect-vm-ssh.md).
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configuration-settings.md
+
+ Title: 'About Azure Bastion configuration settings'
+description: Learn about the available configuration settings for Azure Bastion.
+++++ Last updated : 07/12/2021++++
+# About Bastion configuration settings
+
+The sections in this article discuss the resources and settings for Azure Bastion.
+
+## <a name="skus"></a>SKUs
+
+A SKU is also known as a Tier. Azure Bastion supports two SKU types: Basic and Standard. The SKU is configured in the Azure portal during the workflow when you configure Bastion. You can [upgrade a Basic SKU to a Standard SKU](#upgradesku).
+
+* The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to Virtual Machines (VMs) without exposing public IP addresses on the target application VMs.
+* The **Standard SKU** is in **Preview**. The Standard SKU enables premium features that allow Azure Bastion to manage remote connectivity at a larger scale.
+
+The following table shows features and corresponding SKUs.
++
+### Configuration methods
+
+During Preview, you must use the Azure portal if you want to specify the Standard SKU. If you use the Azure CLI or Azure PowerShell to configure Bastion, the SKU can't be specified and defaults to the Basic SKU.
+
+| Method | Value | Links |
+| | | |
+| Azure portal | Tier - Basic or <br>Standard (Preview) | [Quickstart - Configure Bastion from VM settings](quickstart-host-portal.md)<br>[Tutorial - Configure Bastion](tutorial-create-host-portal.md) |
+| Azure PowerShell | Basic only - no settings |[Configure Bastion - PowerShell](bastion-create-host-powershell.md) |
+| Azure CLI | Basic only - no settings | [Configure Bastion - CLI](create-host-cli.md) |
+
+### <a name="upgradesku"></a>Upgrade a SKU
+
+Azure Bastion supports upgrading from a Basic to a Standard SKU. However, downgrading from Standard to Basic is not supported. To downgrade, you must delete and recreate Azure Bastion. The Standard SKU is in Preview.
+
+#### Configuration methods
+
+You can configure this setting using the following method:
+
+| Method | Value | Links |
+| | | |
+| Azure portal |Tier | [Upgrade a SKU - Preview](upgrade-sku.md)|
+
+## <a name="instance"></a>Instances and host scaling (Preview)
+
+An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**.
+
+Each instance can support 10-12 concurrent RDP/SSH connections. The number of connections per instance depends on what actions you take when connected to the client VM. For example, if you're doing something data intensive, it creates a larger load for the instance to process. Once the concurrent session limit is exceeded, an additional scale unit (instance) is required.
+
+Instances are created in the AzureBastionSubnet. For host scaling, the AzureBastionSubnet should be /26 or larger. Using a smaller subnet limits the number of instances you can create. For more information about the AzureBastionSubnet, see the [subnets](#subnet) section in this article.
+
+### Configuration methods
+
+You can configure this setting using the following method:
+
+| Method | Value | Links |
+| | | |
+| Azure portal |Instance count | [Configure host scaling - Preview](configure-host-scaling.md)|
++
+## <a name="subnet"></a>Azure Bastion subnet
+
+Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. This subnet needs to be created in the same Virtual Network that Azure Bastion is deployed to. The subnet must have the following configuration:
+
+* Subnet name must be *AzureBastionSubnet*.
+* Subnet size must be /27 or larger (/26, /25 etc.).
+* For host scaling, a /26 or larger subnet is recommended. Using a smaller subnet space limits the number of scale units. For more information, see the [Host scaling](#instance) section of this article.
+* The subnet must be in the same VNet and resource group as the bastion host.
+* The subnet cannot contain additional resources.
+
+### Configuration methods
+
+You can configure this setting using the following methods:
+
+| Method | Value | Links |
+| | | |
+| Azure portal | Subnet |[Quickstart - Configure Bastion from VM settings](quickstart-host-portal.md)<br>[Tutorial - Configure Bastion](tutorial-create-host-portal.md)|
+| Azure PowerShell | -subnetName|[cmdlet](/powershell/module/az.network/new-azbastion#parameters) |
+| Azure CLI | --subnet-name | [command](/cli/azure/network/vnet#az_network_vnet_create) |
+
+## <a name="public-ip"></a>Public IP address
+
+Azure Bastion requires a Public IP address. The Public IP must have the following configuration:
+
+* The Public IP address SKU must be **Standard**.
+* The Public IP address assignment/allocation method must be **Static**.
+* The Public IP address name is the resource name by which you want to refer to this public IP address.
+* You can choose to use a public IP address that you already created, as long as it meets the criteria required by Azure Bastion and is not already in use.
+
+### Configuration methods
+
+You can configure this setting using the following methods:
+
+| Method | Value | Links |
+| | | |
+| Azure portal | Public IP address |[Azure portal](https://portal.azure.com)|
+| Azure PowerShell | -PublicIpAddress| [cmdlet](/powershell/module/az.network/new-azbastion#parameters) |
+| Azure CLI | --public-ip create | [command](/cli/azure/network/public-ip) |
+
+## Next steps
+
+For frequently asked questions, see the [Azure Bastion FAQ](bastion-faq.md).
bastion Configure Host Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configure-host-scaling.md
+
+ Title: 'Add scale units for host scaling'
+
+description: Learn how to add additional instances (scale units) to Azure Bastion.
+++++ Last updated : 07/12/2021+
+# Customer intent: As someone with a networking background, I want to configure host scaling.
+++
+# Configure host scaling (Preview)
+
+This article helps you add scale units (instances) to Azure Bastion to accommodate more concurrent client connections. During Preview, this setting can be configured in the Azure portal only. For more information about host scaling, see [Configuration settings](configuration-settings.md#instance).
+
+## Configuration steps
+
+1. In the Azure portal, navigate to your Bastion host.
+1. Host scaling requires the Standard tier. On the **Configuration** page, for **Tier**, verify the tier is **Standard**. If the tier is Basic, select **Standard** from the dropdown.
+
+ :::image type="content" source="./media/configure-host-scaling/select-sku.png" alt-text="Screenshot of Select Tier." lightbox="./media/configure-host-scaling/select-sku.png":::
+1. To configure scaling, adjust the instance count. Each instance is a scale unit.
+
+ :::image type="content" source="./media/configure-host-scaling/instance-count.png" alt-text="Screenshot of Instance count slider." lightbox="./media/configure-host-scaling/instance-count.png":::
+1. Click **Apply** to apply changes.
+
+## Next steps
+
+* Read the [Bastion FAQ](bastion-faq.md) for additional information.
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/create-host-cli.md
Previously updated : 10/14/2020 Last updated : 07/12/2021 # Customer intent: As someone with a networking background, I want to create an Azure Bastion host.
This article shows you how to create an Azure Bastion host using Azure CLI. Once you provision the Azure Bastion service in your virtual network, the seamless RDP/SSH experience is available to all of the VMs in the same virtual network. Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine.
-Optionally, you can create an Azure Bastion host by using the [Azure portal](./tutorial-create-host-portal.md), or using [Azure PowerShell](bastion-create-host-powershell.md).
+Optionally, you can create an Azure Bastion host by using the following methods:
+* [Azure portal](./tutorial-create-host-portal.md)
+* [Azure PowerShell](bastion-create-host-powershell.md)
+ ## Prerequisites
Verify that you have an Azure subscription. If you don't already have an Azure s
[!INCLUDE [Cloud Shell CLI](../../includes/vpn-gateway-cloud-shell-cli.md)]
- >[!NOTE]
- >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ > [!NOTE]
+ > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
> ## <a name="createhost"></a>Create a bastion host
This section helps you create a new Azure Bastion resource using Azure CLI.
> [!NOTE] > As shown in the examples, use the `--location` parameter with `--resource-group` for every command to ensure that the resources are deployed together.
-1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. This is different than a Gateway subnet. You must use a subnet of at least /27 or larger subnet (/27, /26, and so on). Create the **AzureBastionSubnet** without any route tables or delegations. If you use Network Security Groups on the **AzureBastionSubnet**, refer to the [Work with NSGs](bastion-nsg.md) article.
+1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. This is different than a VPN gateway subnet.
+
+ [!INCLUDE [Note about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
```azurecli-interactive az network vnet create --resource-group MyResourceGroup --name MyVnet --address-prefix 10.0.0.0/16 --subnet-name AzureBastionSubnet --subnet-prefix 10.0.0.0/24 --location northeurope
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
Title: 'Quickstart: Configure Azure Bastion and connect to a VM via private IP address and a browser'
+ Title: 'Quickstart: Configure Bastion from VM settings'
-description: Learn how to create an Azure Bastion host from a virtual machine and connect to the VM securely through your browser via private IP address.
+description: Learn how to create an Azure Bastion host from virtual machine settings and connect to the VM securely through your browser via private IP address.
Previously updated : 06/29/2021 Last updated : 07/12/2021 # Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
-# Quickstart: Connect to a VM securely through a browser via private IP address
+# Quickstart: Configure Azure Bastion from VM settings
-You can connect to a virtual machine (VM) through your browser using the Azure portal and Azure Bastion. This quickstart article shows you how to configure Azure Bastion based on your VM settings. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. The VM doesn't need a public IP address, client software, agent, or a special configuration. If you don't need the public IP address on your VM for anything else, you can remove it. You then connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This quickstart article shows you how to configure Azure Bastion based on your VM settings in the Azure portal, and then connect to a VM via private IP address. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. The VM doesn't need a public IP address, client software, agent, or a special configuration. If you don't need the public IP address on your VM for anything else, you can remove it. You then connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
## <a name="prereq"></a>Prerequisites
You can use the following example values when creating this configuration, or yo
| | | | Name | VNet1-bastion | | + Subnet Name | AzureBastionSubnet |
-| AzureBastionSubnet addresses | A subnet within your VNet address space with a /27 subnet mask. For example, 10.1.1.0/27. |
+| AzureBastionSubnet addresses | A subnet within your VNet address space with a subnet mask /27 or larger.<br> For example, 10.1.1.0/26. |
+| Tier/SKU | Standard |
| Public IP address | Create new | | Public IP address name | VNet1-ip | | Public IP address SKU | Standard |
There are a few different ways to configure a bastion host. In the following ste
:::image type="content" source="./media/quickstart-host-portal/select-bastion.png" alt-text="Screenshot of Use Bastion.":::
-1. On the **Connect using Azure Bastion** page, configure the values.
+1. On the **Connect using Azure Bastion** page, in **Step 1**, the values are pre-populated because you are creating the bastion host directly from your VM.
- * **Step 1:** The values are pre-populated because you are creating the bastion host directly from your VM.
+ :::image type="content" source="./media/quickstart-host-portal/create-step-1.png" alt-text="Screenshot of step 1 prepopulated settings." lightbox="./media/quickstart-host-portal/create-step-1.png":::
- * **Step 2:** The address space is pre-populated with a suggested address space. The AzureBastionSubnet must have an address space of /27 or larger (/26, /25, etc.)..
+1. On the **Connect using Azure Bastion** page, in **Step 2**, configure the subnet values. The AzureBastionSubnet address space is pre-populated with a suggested address space. The AzureBastionSubnet must have an address space of /27 or larger (/26, /25, and so on). We recommend using a /26 so that host scaling is not limited. When you finish configuring this setting, click **Create Subnet** to create the AzureBastionSubnet.
- :::image type="content" source="./media/quickstart-host-portal/create-subnet.png" alt-text="Screenshot of create the Bastion subnet.":::
+ :::image type="content" source="./media/quickstart-host-portal/create-subnet.png" alt-text="Screenshot of create the Bastion subnet.":::
-1. Click **Create Subnet** to create the AzureBastionSubnet.
1. After the subnet is created, the page advances automatically to **Step 3**. For Step 3, use the following values: * **Name:** Name the bastion host.
+ * **Tier:** The tier is the SKU. For this exercise, select **Standard** from the dropdown. Selecting the Standard SKU lets you configure the instance count for host scaling. The Basic SKU doesn't support host scaling. For more information, see [Configuration settings - SKU](configuration-settings.md#skus). The Standard SKU is in Preview.
+ * **Instance count:** This is the setting for host scaling. Use the slider to configure. If you specify the Basic tier SKU, you are limited to 2 instances and cannot configure this setting. For more information, see [Configuration settings - host scaling](configuration-settings.md#instance). Instance count is in Preview and relies on the Standard SKU. In this quickstart, you can select the instance count you'd prefer, keeping in mind any scale unit [pricing](https://azure.microsoft.com/pricing/details/azure-bastion) considerations.
* **Public IP address:** Select **Create new**. * **Public IP address name:** The name of the Public IP address resource.
- * **Public IP address SKU:** Pre-configured as **Standard**
+ * **Public IP address SKU:** Pre-configured as **Standard**.
* **Assignment:** Pre-configured to **Static**. You can't use a Dynamic assignment for Azure Bastion. * **Resource group:** The same resource group as the VM.
- :::image type="content" source="./media/quickstart-host-portal/create-bastion.png" alt-text="Screenshot of Step 3.":::
+ :::image type="content" source="./media/quickstart-host-portal/create-step-3.png" alt-text="Screenshot of Step 3.":::
1. After completing the values, select **Create Azure Bastion using defaults**. Azure validates your settings, then creates the host. The host and its resources take about 5 minutes to create and deploy. ## <a name="remove"></a>Remove VM public IP address
After Bastion has been deployed to the virtual network, the screen changes to th
1. Type the username and password for your virtual machine. Then, select **Connect**. :::image type="content" source="./media/quickstart-host-portal/connect.png" alt-text="Screenshot shows the Connect using Azure Bastion dialog.":::
-1. The RDP connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service.
+1. The RDP connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service. Click **Allow** when asked for permissions to the clipboard. This lets you use the remote clipboard arrows on the left of the screen.
* When you connect, the desktop of the VM may look different than the example screenshot. * Using keyboard shortcut keys while connected to a VM may not result in the same behavior as shortcut keys on a local computer. For example, when connected to a Windows VM from a Windows client, CTRL+ALT+END is the keyboard shortcut for CTRL+ALT+Delete on a local computer. To do this from a Mac while connected to a Windows VM, the keyboard shortcut is Fn+CTRL+ALT+Backspace.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
Previously updated : 06/29/2021 Last updated : 07/12/2021
-# Tutorial: Configure Bastion and connect to a Windows VM through a browser
+# Tutorial: Configure Bastion and connect to a Windows VM
This tutorial shows you how to connect to a virtual machine through your browser using Azure Bastion and the Azure portal. In this tutorial, using the Azure portal, you deploy Bastion to your virtual network. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. When you use Bastion to connect, the VM does not need a public IP address or special software. After deploying Bastion, you can remove the public IP address from your VM if it is not needed for anything else. Next, you connect to a VM via its private IP address using the Azure portal. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md).
If you don't have an Azure subscription, create a [free account](https://azure
>The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone. >
+### <a name="values"></a>Example values
+
+You can use the following example values when creating this configuration, or you can substitute your own.
+
+**Basic VNet and VM values:**
+
+|**Name** | **Value** |
+| | |
+| Virtual machine| TestVM |
+| Resource group | TestRG1 |
+| Region | East US |
+| Virtual network | VNet1 |
+| Address space | 10.1.0.0/16 |
+| Subnets | FrontEnd: 10.1.0.0/24 |
+
+**Azure Bastion values:**
+
+|**Name** | **Value** |
+| | |
+| Name | VNet1-bastion |
+| + Subnet Name | AzureBastionSubnet |
+| AzureBastionSubnet addresses | A subnet within your VNet address space with a subnet mask /27 or larger.<br> For example, 10.1.1.0/26. |
+| Tier/SKU | Standard |
+| Instance count (host scaling)| 3 or greater |
+| Public IP address | Create new |
+| Public IP address name | VNet1-ip |
+| Public IP address SKU | Standard |
+| Assignment | Static |
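+
+If you'd like to stage the example VNet and VM from the command line before working through the portal steps, the following Azure CLI sketch uses the values above. The VM image, admin user name, and password placeholder are illustrative assumptions.
+
+```azurecli
+# Resource group and virtual network matching the example values table.
+az group create --name TestRG1 --location eastus
+az network vnet create --resource-group TestRG1 --name VNet1 --address-prefix 10.1.0.0/16 --subnet-name FrontEnd --subnet-prefix 10.1.0.0/24
+
+# A test VM to connect to later; image, user name, and password are placeholders.
+az vm create --resource-group TestRG1 --name TestVM --image Win2019Datacenter --vnet-name VNet1 --subnet FrontEnd --admin-username azureuser --admin-password "<your-strong-password>"
+```
+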
+ ## Sign in to the Azure portal Sign in to the [Azure portal](https://portal.azure.com).
Sign in to the [Azure portal](https://portal.azure.com).
This section helps you create the bastion object in your VNet. This is required in order to create a secure connection to a VM in the VNet.
-1. From the **Home** page, select **+ Create a resource**.
-1. On the **New** page, in the Search box, type **Bastion**, then select **Enter** to get to the search results. On the result for **Bastion**, verify that the publisher is Microsoft.
-1. Select **Create**.
+1. Type **Bastion** into the search.
+1. Under services, click **Bastions**.
+1. On the Bastions page, click **+ Create** to open the **Create a Bastion** page.
1. On the **Create a Bastion** page, configure a new Bastion resource.
- :::image type="content" source="./media/tutorial-create-host-portal/create.png" alt-text="Screenshot of Create a Bastion portal page." lightbox="./media/tutorial-create-host-portal/create-expand.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/review-create.png" alt-text="Screenshot of Create a Bastion portal page." lightbox="./media/tutorial-create-host-portal/create-expand.png":::
+
+### Project details
+
+* **Subscription**: The Azure subscription you want to use.
+
+* **Resource Group**: The Azure resource group in which the new Bastion resource will be created. If you don't have an existing resource group, you can create a new one.
+
+### Instance details
+
+* **Name**: The name of the new Bastion resource.
+
+* **Region**: The Azure public region in which the resource will be created.
+
+* **Tier:** The tier is also known as the **SKU**. For this tutorial, we select the **Standard** SKU from the dropdown. Selecting the Standard SKU lets you configure the instance count for host scaling. The Basic SKU doesn't support host scaling. For more information, see [Configuration settings - SKU](configuration-settings.md#skus). The Standard SKU is in Preview.
+
+* **Instance count:** This is the setting for **host scaling** and configured in scale unit increments. Use the slider to configure the instance count. If you specified the Basic tier SKU, you cannot configure this setting. For more information, see [Configuration settings - host scaling](configuration-settings.md#instance). In this tutorial, you can select the instance count you'd prefer, keeping in mind any scale unit [pricing](https://azure.microsoft.com/pricing/details/azure-bastion) considerations.
+
+### Configure virtual networks
+
+* **Virtual network**: The virtual network in which the Bastion resource will be created. You can create a new virtual network in the portal during this process, or use an existing virtual network. If you are using an existing virtual network, make sure the existing virtual network has enough free address space to accommodate the Bastion subnet requirements. If you don't see your virtual network from the dropdown, make sure you have selected the correct Resource Group.
+
+* **Subnet**: Once you create or select a virtual network, the subnet field appears on the page. This is the subnet in which your Bastion instances will be deployed.
+
+#### Add the AzureBastionSubnet
+
+In most cases, you will not already have an AzureBastionSubnet configured. To configure the bastion subnet:
+
+1. Select **Manage subnet configuration**. This takes you to the **Subnets** page.
+
+ :::image type="content" source="./media/tutorial-create-host-portal/subnet.png" alt-text="Screenshot of Manage subnet configuration.":::
+1. On the **Subnets** page, select **+Subnet** to open the **Add subnet** page.
+
+1. Create a subnet using the following guidelines:
+
+ * The subnet must be named **AzureBastionSubnet**.
+ * The subnet must be at least /27 or larger. For the Standard SKU, we recommend /26 or larger to accommodate future additional host scaling instances.
+
+ :::image type="content" source="./media/tutorial-create-host-portal/bastion-subnet.png" alt-text="Screenshot of the AzureBastionSubnet subnet.":::
+
+1. You don't need to fill out additional fields on this page. Select **Save** at the bottom of the page to save the settings and close the **Add subnet** page.
+
+1. At the top of the **Subnets** page, select **Create a Bastion** to return to the Bastion configuration page.
+
+ :::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion.":::
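+
+If you'd rather add the subnet from the command line, the following Azure CLI sketch is equivalent to the portal steps above (it assumes the VNet1 and TestRG1 example values):
+
+```azurecli
+# The subnet must be named AzureBastionSubnet; /26 leaves room for host scaling.
+az network vnet subnet create --resource-group TestRG1 --vnet-name VNet1 --name AzureBastionSubnet --address-prefixes 10.1.1.0/26
+```
+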
- * **Subscription**: The Azure subscription you want to use to create a new Bastion resource.
- * **Resource Group**: The Azure resource group in which the new Bastion resource will be created. If you don't have an existing resource group, you can create a new one.
- * **Name**: The name of the new Bastion resource.
- * **Region**: The Azure public region that the resource will be created in.
- * **Virtual network**: The virtual network in which the Bastion resource will be created. You can create a new virtual network in the portal during this process, or use an existing virtual network. If you are using an existing virtual network, make sure the existing virtual network has enough free address space to accommodate the Bastion subnet requirements. If you don't see your virtual network from the dropdown, make sure you have selected the correct Resource Group.
- * **Subnet**: Once you create or select a virtual network, the subnet field will appear. The subnet in your virtual network where the new Bastion host will be deployed. The subnet will be dedicated to the Bastion host. Select **Manage subnet configuration** and create the Azure Bastion subnet. Select **+Subnet** and create a subnet using the following guidelines:
+### Public IP address
- * The subnet must be named **AzureBastionSubnet**.
- * The subnet must be at least /27 or larger.
+The public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). Create a **new public IP address**. The public IP address must be in the same region as the Bastion resource you are creating. This IP address does not have anything to do with any of the VMs that you want to connect to. It's the public IP address for the Bastion host resource.
- You don't need to fill out additional fields. Select **OK** and then, at the top of the page, select **Create a Bastion** to return to the Bastion configuration page.
- * **Public IP address**: The public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). Create a new public IP address. The public IP address must be in the same region as the Bastion resource you are creating. This IP address does not have anything to do with any of the VMs that you want to connect to. It's the public IP address for the Bastion host resource.
- * **Public IP address name**: The name of the public IP address resource. For this tutorial, you can leave the default.
- * **Public IP address SKU**: This setting is prepopulated by default to **Standard**. Azure Bastion uses/supports only the Standard public IP SKU.
- * **Assignment**: This setting is prepopulated by default to **Static**.
+ * **Public IP address name**: The name of the public IP address resource. For this tutorial, you can leave the default.
+ * **Public IP address SKU**: This setting is prepopulated by default to **Standard**. Azure Bastion uses/supports only the Standard public IP SKU.
+ * **Assignment**: This setting is prepopulated by default to **Static**.
-1. When you have finished specifying the settings, select **Review + Create**. This validates the values. Once validation passes, you can create the Bastion resource.
+### Review and create
- :::image type="content" source="./media/tutorial-create-host-portal/validation.png" alt-text="Screenshot of validation page.":::
-1. Review your settings. Next, at the bottom of the page, select **Create**.
+1. When you finish specifying the settings, select **Review + Create**. This validates the values. Once validation passes, you can create the Bastion resource.
+1. Review your settings.
+1. At the bottom of the page, select **Create**.
1. You will see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 5 minutes for the Bastion resource to be created and deployed. ## Remove VM public IP address
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/upgrade-sku.md
+
+ Title: 'Upgrade a SKU'
+
+description: Learn how to change Tiers from the Basic to the Standard SKU.
+++++ Last updated : 07/12/2021+
+# Customer intent: As someone with a networking background, I want to upgrade to the Standard SKU.
+++
+# Upgrade a SKU (Preview)
+
+This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you upgrade, you can't revert back to the Basic SKU without deleting and reconfiguring Bastion. During Preview, this setting can be configured in the Azure portal only. For more information about SKUs, see [Configuration settings - SKUs](configuration-settings.md#skus).
+
+## Configuration steps
+
+1. In the Azure portal, navigate to your Bastion host.
+1. On the **Configuration** page, for **Tier**, select **Standard** from the dropdown.
+
+ :::image type="content" source="./media/upgrade-sku/select-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/select-sku-expand.png":::
+
+1. Click **Apply** to apply changes.
+
+## Next steps
+
+* See [Configuration settings](configuration-settings.md) for additional configuration information.
+* Read the [Bastion FAQ](bastion-faq.md).
cdn Cdn Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-traffic-manager.md
After you configure your CDN and Traffic Manager profiles, follow these steps to
> > [!NOTE]
- > For implemeting this fail over scenerio both endpoints needs to be in different profiles, and the different profiles should be by different CDN provider to avoid domain name conflicts.
+ > To implement this failover scenario, both endpoints need to be in different profiles, and the profiles should be from different CDN providers to avoid domain name conflicts.
> 2. From your Azure CDN profile, select the first CDN endpoint (Akamai). Select **Add custom domain** and input **cdndemo101.dustydogpetcare.online**. Verify that the checkmark to validate the custom domain is green.
cognitive-services Define Custom Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Custom-Search/define-custom-suggestions.md
Last updated 02/12/2019-+ # Configure your custom autosuggest experience
cognitive-services Faq Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-voice-assistants.md
- Title: Voice assistants frequently asked questions-
-description: Get answers to the most popular questions about voice assistants using Custom Commands or the Direct Line Speech channel.
------ Previously updated : 11/05/2019---
-# Voice assistants frequently asked questions
-
-If you can't find answers to your questions in this document, check out [other support options](../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext%253fcontext%253d%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
-
-## General
-
-**Q: What is a voice assistant?**
-
-**A:** Like Cortana, a voice assistant is a solution that listens to a user's spoken utterances, analyzes the contents of those utterances for meaning, performs one or more actions in response to the utterance's intent, and then provides a response to the user that often includes a spoken component. It's a "voice-in, voice-out" experience for interacting with a system. voice assistant authors create an on-device application using the `DialogServiceConnector` in the Speech SDK to communicate with an assistant created using [Custom Commands](custom-commands.md) or the [Direct Line Speech](direct-line-speech.md) channel of the Bot Framework. These assistants can use custom keywords, custom speech, and custom voice to provide an experience tailored to your brand or product.
-
-**Q: Should I use Custom Commands or Direct Line Speech? What's the difference?**
-
-**A:** [Custom Commands](custom-commands.md) is a lower-complexity set of tools to easily create and host an assistant that's well-suited to task completion scenarios. [Direct Line Speech](direct-line-speech.md) provides richer, more sophisticated capabilities that can enable robust conversational scenarios. See the [comparison of assistant solutions](voice-assistants.md#choosing-an-assistant-solution) for more information.
-
-**Q: How do I get started?**
-
-**A:** The best way to begin with creating a Custom Commands (Preview) application or basic Bot Framework bot.
--- [Create a Custom Commands (Preview) application](./quickstart-custom-commands-application.md)-- [Create a basic Bot Framework bot](/azure/bot-service/bot-builder-tutorial-basic-deploy)-- [Connect a bot to the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech)-
-## Debugging
-
-**Q: Where's my channel secret?**
-
-**A:** If you've used the preview version of Direct Line Speech or you're reading related documentation, you may expect to find a secret key on the Direct Line Speech channel registration page. The v1.7 `DialogServiceConfig` factory method `FromBotSecret` in the Speech SDK also expects this value.
-
-The latest version of Direct Line Speech simplifies the process of contacting your bot from a device. On the channel registration page, the drop-down at the top associates your Direct Line Speech channel registration with a speech resource. Once associated, the v1.8 Speech SDK includes a `BotFrameworkConfig::FromSubscription` factory method that will configure a `DialogServiceConnector` to contact the bot you've associated with your subscription.
-
-If you're still migrating your client application from v1.7 to v1.8, `DialogServiceConfig::FromBotSecret` may continue to work with a non-empty, non-null value for its channel secret parameter, e.g. the previous secret you used. It will simply be ignored when using a speech subscription associated with a newer channel registration. Please note that the value _must_ be non-null and non-empty, as these are checked for on the device before the service-side association is relevant.
-
-For a more detailed guide, please see the [tutorial section](tutorial-voice-enable-your-bot-speech-sdk.md#register-the-direct-line-speech-channel) that walks through channel registration.
-
-**Q: I get a 401 error when connecting and nothing works. I know my speech subscription key is valid. What's going on?**
-
-**A:** When managing your subscription on the Azure portal, please ensure you're using the **Speech** resource (Microsoft.CognitiveServicesSpeechServices, "Speech") and _not_ the **Cognitive Services** resource (Microsoft.CognitiveServicesAllInOne, "All Cognitive Services"). Also, please check [Speech service region support for voice assistants](regions.md#voice-assistants).
-
-![correct subscription for direct line speech](media/voice-assistants/faq-supported-subscription.png "example of a compatible Speech subscription")
-
-**Q: I get recognition text back from my `DialogServiceConnector`, but I see a '1011' error and nothing from my bot. Why?**
-
-**A:** This error indicates a communication problem between your assistant and the voice assistant service.
--- For Custom Commands, ensure that your Custom Commands Application is published-- For Direct Line Speech, ensure that you've [connected your bot to the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech), [added Streaming protocol support](/azure/bot-service/directline-speech-bot) to your bot (with the related Web Socket support), and then check that your bot is responding to incoming requests from the channel.-
-**Q: This code still doesn't work and/or I'm getting a different error when using a `DialogServiceConnector`. What should I do?**
-
-**A:** File-based logging provides substantially more detail and can help accelerate support requests. To enable this functionality, see [how to use file logging](how-to-use-logging.md).
-
-## Next steps
--- [Troubleshooting](troubleshooting.md)-- [Release notes](releasenotes.md)
cognitive-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md
To start developing a voice assistant for Windows, you will need to mak
## Obtain resources from Microsoft
-Some resources necessary for a completely customized voice agent on Windows will require resources from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.md#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development.
+Some resources necessary for a completely customized voice agent on Windows will require resources from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development.
- **Keyword model:** Voice activation requires a keyword model from Microsoft in the form of a .bin file. The .bin file provided in the UWP Voice Assistant Sample is trained on the keyword *Contoso*. - **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they are protected under Limited Access Feature restrictions. To use a Limited Access Feature, you will need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.
These are the requirements to create a basic dialog service using Direct Line Sp
## Try out the sample app
-With your Speech Services subscription key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.md#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials.
+With your Speech Services subscription key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials.
## Create your own voice assistant for Windows
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following are the supported content types for the `interpret-as` and `format
| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option". | | `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. For example, "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format`. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." | | `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. The following are valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
+| `name` | | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd] instead of spelling it out. <br />This is useful for reading Chinese person names, because some characters are pronounced differently when used as a family name. The speech synthesis engine pronounces 仇 in <br /><br />`<say-as interpret-as="address">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
**Usage**
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
If you get an error message in your main app window, use this table to identify
|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure you [checked the "Enable Streaming Endpoint"](#register-the-direct-line-speech-channel) box and/or [toggled **Web sockets**](#enable-web-sockets) to On.<br>Make sure your Azure App Service is running. If it is, try restarting your App Service.| |Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in its output Activity [Speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field, but the Azure region associated with your Speech subscription key does not support neural voices. See [Neural and standard voices](./regions.md#neural-and-standard-voices).|
-If your issue isn't addressed in the table, see [Voice assistants: Frequently asked questions](faq-voice-assistants.md). If your are still not able to resolve your issue after following all the steps in this tutorial, please enter a new issue in the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).
+If your issue isn't addressed in the table, see [Voice assistants: Frequently asked questions](faq-voice-assistants.yml). If you're still not able to resolve your issue after following all the steps in this tutorial, please open a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).
#### A note on connection time out
cognitive-services Windows Voice Assistants Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/windows-voice-assistants-faq.md
- Title: Voice Assistants on Windows - FAQ-
-description: Common questions that frequently come up during Windows voice agent development.
------ Previously updated : 04/15/2020---
-# Samples and FAQs
-
-## The UWP Voice Assistant Sample
-
-The UWP Voice Assistant Sample is a voice assistant on Windows that serves to
--- demonstrate the use of the Windows ConversationalAgent APIs-- display best practices for a voice agent on Windows-- provide an adaptable app and reusable components for your MVP agent application-- show how Direct Line Speech, along with Bot Framework or Custom Commands, can be used together with the Windows ConversationalAgent APIs for an end-to-end voice agent experience-
-The documentation provided with the sample app walks through the code path of voice activation and agent management, from the prerequisites of startup through proper cleanup.
-
-> [!div class="nextstepaction"]
-> [Visit the GitHub repo for the UWP Sample](https://aka.ms/MVA/sample)
-
-## Frequently asked questions
-
-### How do I contact Microsoft for resources like Limited Access Feature tokens and keyword model files?
-
-Contact winvoiceassistants@microsoft.com to request these resources.
-
-### My app is showing in a small window when I activate it by voice. How can I transition from the compact view to a full application window?
-
-When your application is first activated by voice, it is started in a compact view. Please read the [Design guidance for voice activation preview](windows-voice-assistants-best-practices.md#design-guidance-for-voice-activation-preview) for guidance on the different views and transitions between them for voice assistants on Windows.
-
-To make the transition from compact view to full app view, use the appView API `TryEnterViewModeAsync`:
-
-`var appView = ApplicationView.GetForCurrentView();
- await appView.TryEnterViewModeAsync(ApplicationViewMode.Default);`
-
-### Why are voice assistant features on Windows only enabled for UWP applications?
-
-Given the privacy risks associated with features like voice activation, the features of the UWP platform are necessary allow the voice assistant features on Windows to be sufficiently secure.
-
-### The UWP Voice Assistant Sample uses Direct Line Speech. Do I have to use Direct Line Speech for my voice assistant on Windows?
-
-The UWP Sample Application was developed using Direct Line Speech and the Speech Services SDK as a demonstration of how to use a dialog service with the Windows Conversational Agent capability. However, you can use any service for local and cloud keyword verification, speech-to-text conversion, bot dialog, and text-to-speech conversion.
cognitive-services Windows Voice Assistants Implementation Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/windows-voice-assistants-implementation-guide.md
To properly close the application programmatically while above or below lock, us
## Next steps > [!div class="nextstepaction"]
-> [Visit the UWP Voice Assistant Sample app for examples and code walk-throughs](windows-voice-assistants-faq.md#the-uwp-voice-assistant-sample)
+> [Visit the UWP Voice Assistant Sample app for examples and code walk-throughs](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample)
cognitive-services Windows Voice Assistants Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/windows-voice-assistants-overview.md
The keyword spotter that triggers the application to start has achieved low powe
- **Review the design guidelines:** Our [design guidelines](windows-voice-assistants-best-practices.md) lay out the key work required to provide the best possible experiences for voice activation on Windows 10. - **Visit the Getting Started page:** Start [here](how-to-windows-voice-assistants-get-started.md) for the steps to begin implementing voice assistants on Windows, from setting your development environment through an introduction to implementation guide.-- **Try out the sample app**: To experience these capabilities firsthand, visit the [UWP Voice Assistant Sample](windows-voice-assistants-faq.md#the-uwp-voice-assistant-sample) page and follow the steps to get the sample client running.
+- **Try out the sample app**: To experience these capabilities firsthand, visit the [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) page and follow the steps to get the sample client running.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
Previously updated : 07/07/2021 Last updated : 07/12/2021
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with r
### GA release updates
-* General availability for Text Analytics for health for both containers and hosted API (/health).
-* General availability for Opinion Mining.
-* General availability for PII extraction and redaction.
-* General availability for Asynchronous (`/analyze`) endpoint.
-* Updated [quickstart](quickstarts/client-libraries-rest-api.md) examples.
+* General availability for [Text Analytics for health](how-tos/text-analytics-for-health.md?tabs=ner) for both containers and hosted API (/health).
+* General availability for [Opinion Mining](how-tos/text-analytics-how-to-sentiment-analysis.md?tabs=version-3-1#opinion-mining).
+* General availability for [PII extraction and redaction](how-tos/text-analytics-how-to-entity-linking.md?tabs=version-3-1#personally-identifiable-information-pii).
+* General availability for [Asynchronous (`/analyze`) endpoint](how-tos/text-analytics-how-to-call-api.md?tabs=synchronous#using-the-api-asynchronously).
+* Updated [quickstart](quickstarts/client-libraries-rest-api.md) examples with new SDK.
## June 2021
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
If you wish to call ACS' APIs manually using an access key, then you will need t
Managed identities provide superior security and ease of use over other authorization options. For example, by using Azure AD, you avoid having to store your account access key within your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with Communication Services applications, Microsoft recommends moving to Azure AD where possible.
-To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/managed-identity-from-cli.md). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [managed identity](../quickstarts/managed-identity.md) is used.
+To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [managed identity](../quickstarts/identity/service-principal.md) is used.
### User Access Tokens
User access tokens are generated using the Identity SDK and are associated with
> [!div class="nextstepaction"] > [Create and manage Communication Services resources](../quickstarts/create-communication-resource.md)
-> [Create an Azure Active Directory managed identity application from the Azure CLI](../quickstarts/managed-identity-from-cli.md)
+> [Create an Azure Active Directory managed identity application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md)
> [Create User Access Tokens](../quickstarts/access-tokens.md) For more information, see the following articles:
communication-services Service Principal From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal-from-cli.md
+
+ Title: Create an Azure Active Directory Service Principal from the Azure CLI
+
+description: In this quick start we'll create an application and service principal to authenticate with Azure Communication Services.
++++ Last updated : 06/30/2021++++
+# Authorize access with Azure Active Directory to your communication resource in your development environment
+
+The Azure Identity SDK provides Azure Active Directory (Azure AD) token authentication support for Azure SDK packages. The latest versions of the Azure Communication Services SDKs for .NET, Java, Python, and JavaScript integrate with the Azure Identity library to provide a simple and secure means to acquire an OAuth 2.0 token for authorization of Azure Communication Services requests.
+
+An advantage of the Azure Identity SDK is that it enables you to use the same code to authenticate across multiple services whether your application is running in the development environment or in Azure.
+
+The Azure Identity SDK can authenticate with many methods. In development, we'll use a service principal tied to a registered application, with credentials stored in environment variables. This approach is suitable for testing and development.
+
+## Prerequisites
+
+ - Azure CLI. [Installation guide](/cli/azure/install-azure-cli)
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+
+## Setting Up
+
+When using Azure Active Directory with other Azure resources, you should use managed identities. To learn how to enable managed identities for Azure resources, see one of these articles:
+
+- [Azure portal](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+- [Azure PowerShell](../../../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)
+- [Azure CLI](../../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)
+- [Azure Resource Manager template](../../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)
+- [Azure Resource Manager SDKs](../../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+- [App services](../../../app-service/overview-managed-identity.md)
+
+## Authenticate a registered application in the development environment
+
+If your development environment does not support single sign-on or login via a web browser, then you can use a registered application to authenticate from the development environment.
+
+### Creating an Azure Active Directory Registered Application
+
+To create a registered application from the Azure CLI, you need to be signed in to the Azure account where you want the operations to take place. To do this, you can use the `az login` command and enter your credentials in the browser. Once you are signed in to your Azure account from the CLI, you can call the `az ad sp create-for-rbac` command to create the registered application and service principal.
+
+The following example uses the Azure CLI to create a new registered application:
+
+```azurecli
+az ad sp create-for-rbac --name <application-name>
+```
+
+The `az ad sp create-for-rbac` command will return a list of service principal properties in JSON format. Copy these values so that you can use them to create the necessary environment variables in the next step.
+
+```json
+{
+ "appId": "generated-app-ID",
+ "displayName": "service-principal-name",
+ "name": "http://service-principal-uri",
+ "password": "generated-password",
+ "tenant": "tenant-ID"
+}
+```
+> [!IMPORTANT]
+> Azure role assignments may take a few minutes to propagate.
+
+#### Set environment variables
+
+The Azure Identity SDK reads values from three environment variables at runtime to authenticate the application. The following table describes the value to set for each environment variable.
+
+| Environment variable | Value |
+| | - |
+| `AZURE_CLIENT_ID` | `appId` value from the generated JSON |
+| `AZURE_TENANT_ID` | `tenant` value from the generated JSON |
+| `AZURE_CLIENT_SECRET` | `password` value from the generated JSON |
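+
+For example, in a bash shell you might export the variables as shown below; the values are the placeholders from the JSON output above, so substitute your own.
+
+```bash
+# Replace each value with the corresponding field from the az ad sp create-for-rbac output.
+export AZURE_CLIENT_ID="generated-app-ID"
+export AZURE_TENANT_ID="tenant-ID"
+export AZURE_CLIENT_SECRET="generated-password"
+```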
+
+> [!IMPORTANT]
+> After you set the environment variables, close and re-open your console window. If you are using Visual Studio or another development environment, you may need to restart it in order for it to register the new environment variables.
+
+Once these variables have been set, you should be able to use the DefaultAzureCredential object in your code to authenticate to the service client of your choice.
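+
+If you want a quick, optional sanity check of the credentials from the shell before wiring them into application code, you can sign the Azure CLI in as the service principal (run `az logout` afterward if you don't want the CLI to remain signed in that way):
+
+```azurecli
+# Sign in as the service principal using the environment variables set above.
+az login --service-principal --username "$AZURE_CLIENT_ID" --password "$AZURE_CLIENT_SECRET" --tenant "$AZURE_TENANT_ID"
+```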
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about authentication](../../concepts/authentication.md)
+
+You may also want to:
+
+- [Learn more about Azure Identity library](/dotnet/api/overview/azure/identity-readme)
communication-services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal.md
+
+ Title: Use Azure Active Directory in Communication Services
+
+description: Azure Active Directory lets you authorize Azure Communication Services access from applications running in Azure VMs, function apps, and other resources.
++++ Last updated : 06/30/2021++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Use Azure Active Directory with Communication Services
+Get started with Azure Communication Services by using Azure Active Directory. The Communication Services Identity and SMS SDKs support Azure Active Directory (Azure AD) authentication.
+
+This quickstart shows you how to authorize access to the Identity and SMS SDKs from an Azure environment that supports Active Directory. It also describes how to test your code in a development environment by creating a service principal for your work.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+- An active Azure Communication Services resource, see [create a Communication Services resource](../create-communication-resource.md) if you do not have one.
+- To send an SMS you will need a [Phone Number](../telephony-sms/get-phone-number.md).
+- A service principal set up for a development environment. See [Authorize access with Azure Active Directory](./service-principal-from-cli.md)
+++++
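+
+Before requests from your code will authorize, the registered application also needs an Azure RBAC role assignment scoped to your Communication Services resource. The sketch below is illustrative only; the placeholder names and the `Contributor` role are assumptions, so substitute the role your scenario requires.
+
+```azurecli
+# Assign a role to the registered application on the Communication Services resource (all bracketed values are placeholders).
+az role assignment create --assignee "<app-id-from-the-registration>" --role "Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/CommunicationServices/<resource-name>"
+```
+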
+## Next steps
+
+- [Learn more about Azure role-based access control](../../../../articles/role-based-access-control/index.yml)
+- [Learn more about Azure identity library for .NET](/dotnet/api/overview/azure/identity-readme)
+- [Creating user access tokens](../../quickstarts/access-tokens.md)
+- [Send an SMS message](../../quickstarts/telephony-sms/send.md)
+- [Learn more about SMS](../../concepts/telephony-sms/concepts.md)
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity.md
- Title: Use managed identities in Communication Services-
-description: Managed identities let you authorize Azure Communication Services access from applications running in Azure VMs, function apps, and other resources.
---- Previously updated : 06/30/2021--
-zone_pivot_groups: acs-js-csharp-java-python
--
-# Use managed identities
-Get started with Azure Communication Services by using managed identities. The Communication Services Identity and SMS SDKs support Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-
-This quickstart shows you how to authorize access to the Identity and SMS SDKs from an Azure environment that supports managed identities. It also describes how to test your code in a development environment.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)-- An active Azure Communication Services resource, see [create a Communication Services resource](./create-communication-resource.md) if you do not have one.-- To send an SMS you will need a [Phone Number](./telephony-sms/get-phone-number.md).-- A setup managed identity for a development environment, see [Authorize access with managed identity](./managed-identity-from-cli.md)-----
-## Next steps
--- [Learn more about Azure role-based access control](../../../articles/role-based-access-control/index.yml)-- [Learn more about Azure identity library for .NET](/dotnet/api/overview/azure/identity-readme)-- [Creating user access tokens](../quickstarts/access-tokens.md)-- [Send an SMS message](../quickstarts/telephony-sms/send.md)-- [Learn more about SMS](../concepts/telephony-sms/concepts.md)
container-registry Allow Access Trusted Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/allow-access-trusted-services.md
Where indicated, access by the trusted service requires additional configuration
|Trusted service |Supported usage scenarios | Configure managed identity with RBAC role |||| | Azure Security Center | Vulnerability scanning by [Azure Defender for container registries](scan-images-defender.md) | No |
-|ACR Tasks | [Access a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes |
+|ACR Tasks | [Access the parent registry or a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes |
|Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-docker-image.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image | Yes | |Azure Container Registry | [Import images from another Azure container registry](container-registry-import-images.md#import-from-an-azure-container-registry-in-the-same-ad-tenant) | No |
Here's a typical workflow to enable an instance of a trusted service to access a
The following example demonstrates using ACR Tasks as a trusted service. See [Cross-registry authentication in an ACR task using an Azure-managed identity](container-registry-tasks-cross-registry-authentication.md) for task details.
-1. Create or update an Azure container registry, and [push a sample base image](container-registry-tasks-cross-registry-authentication.md#prepare-base-registry) to the registry. This registry is the *base registry* for the scenario.
-1. In a second Azure container registry, [define](container-registry-tasks-cross-registry-authentication.md#define-task-steps-in-yaml-file) and [create](container-registry-tasks-cross-registry-authentication.md#option-2-create-task-with-system-assigned-identity) an ACR task to pull an image from the base registry. Enable a system-assigned managed identity when creating the task.
-1. Assign the task identity [an Azure role to access the base registry](container-registry-tasks-authentication-managed-identity.md#3-grant-the-identity-permissions-to-access-other-azure-resources). For example, assign the AcrPull role, which has permissions to pull images.
-1. [Add managed identity credentials](container-registry-tasks-authentication-managed-identity.md#4-optional-add-credentials-to-the-task) to the task.
-1. To confirm that the task bypasses network restrictions, [disable public access](container-registry-access-selected-networks.md#disable-public-network-access) in the base registry.
-1. Run the task. If the base registry and task are configured properly, the task runs successfully, because the base registry allows access.
+1. Create or update an Azure container registry.
+1. [Create](container-registry-tasks-cross-registry-authentication.md#option-2-create-task-with-system-assigned-identity) an ACR task.
+   * Enable a system-assigned managed identity when creating the task.
+   * Disable the default auth mode (`--auth-mode None`) of the task.
+1. Assign the task identity [an Azure role to access the registry](container-registry-tasks-authentication-managed-identity.md#3-grant-the-identity-permissions-to-access-other-azure-resources). For example, assign the AcrPush role, which has permissions to pull and push images.
+1. [Add managed identity credentials for the registry](container-registry-tasks-authentication-managed-identity.md#4-optional-add-credentials-to-the-task) to the task.
+1. To confirm that the task bypasses network restrictions, [disable public access](container-registry-access-selected-networks.md#disable-public-network-access) in the registry.
+1. Run the task. If the registry and task are configured properly, the task runs successfully, because the registry allows access. A CLI sketch of these steps follows this list.
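+
+The following Azure CLI sketch corresponds to the workflow above. Treat it as a template rather than a copy-paste recipe: the registry name, task name, and sample build context are illustrative assumptions.
+
+```azurecli
+# Steps 1-2: create a task with a system-assigned identity and default auth disabled.
+# The build context and image name are illustrative; triggers are disabled to avoid needing a GitHub token.
+az acr task create --registry myregistry --name hello-world-task \
+  --image "hello-world:{{.Run.ID}}" \
+  --context https://github.com/Azure-Samples/acr-build-helloworld-node.git \
+  --file Dockerfile \
+  --assign-identity --auth-mode None \
+  --commit-trigger-enabled false --pull-request-trigger-enabled false
+
+# Step 3: grant the task's identity the AcrPush role on the registry.
+principalId=$(az acr task show --registry myregistry --name hello-world-task --query identity.principalId --output tsv)
+registryId=$(az acr show --name myregistry --query id --output tsv)
+az role assignment create --assignee "$principalId" --scope "$registryId" --role AcrPush
+
+# Step 4: add the identity's credentials to the task for the registry's login server.
+az acr task credential add --registry myregistry --name hello-world-task \
+  --login-server myregistry.azurecr.io --use-identity "[system]"
+
+# Steps 5-6: disable public access, then run the task to confirm trusted-service access still works.
+az acr update --name myregistry --public-network-enabled false
+az acr task run --registry myregistry --name hello-world-task
+```
+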
To test disabling access by trusted
-1. In the base registry, disable the setting to allow access by trusted services.
-1. Run the task again. In this case, the task run fails, because the base registry no longer allows access by the task.
+1. Disable the setting to allow access by trusted services.
+1. Run the task again. In this case, the task run fails, because the registry no longer allows access by the task.
## Next steps
cosmos-db Create Website https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-website.md
The resulting deployment has a fully functional web application that can connect
## Step 1: Deploy the template
-First, select the **Deploy to Azure** button below to open the Azure portal to create a custom deployment. You can also view the Azure Resource Management template from the [Azure Quickstart Templates Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp)
+First, select the **Deploy to Azure** button below to open the Azure portal to create a custom deployment. You can also view the Azure Resource Manager template from the [Azure Quickstart Templates Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp)
[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-webapp%2Fazuredeploy.json)
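
If you prefer the command line over the portal button, a hedged alternative is to deploy the same template with the Azure CLI. The resource group name below is a placeholder, and the CLI prompts for any required template parameters you don't supply with `--parameters`.

```azurecli
# Deploy the quickstart template (same azuredeploy.json file linked above) into a new resource group.
az group create --name myResourceGroup --location eastus
az deployment group create --resource-group myResourceGroup --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.documentdb/cosmosdb-webapp/azuredeploy.json
```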
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/kafka-connector-sink.md
+
+ Title: Kafka Connect for Azure Cosmos DB - Sink connector
+description: The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
++++ Last updated : 06/28/2021+++
+# Kafka Connect for Azure Cosmos DB - Sink connector
+
+Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.
+
+## Prerequisites
+
+* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you do not wish to use Confluent Platform, then you need to install and configure ZooKeeper, Apache Kafka, and Kafka Connect yourself. You will also need to install and configure the Azure Cosmos DB connectors manually.
+* Create an Azure Cosmos DB account and container. See the [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md).
+* Bash shell, which is tested on GitHub Codespaces, macOS, Ubuntu, and Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1.
+* Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
+* Download [Maven](https://maven.apache.org/download.cgi)
+
+## Install sink connector
+
+If you are using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB sink connector is included in the installation, and you can skip this step.
+
+Otherwise, you can download the JAR file from the latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) or package this repo to create a new JAR file. To install the connector manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code.
+
+```bash
+# clone the kafka-connect-cosmosdb repo if you haven't done so already
+git clone https://github.com/microsoft/kafka-connect-cosmosdb.git
+cd kafka-connect-cosmosdb
+
+# package the source code into a JAR file
+mvn clean package
+
+# include the following JAR file in Kafka Connect installation
+ls target/*dependencies.jar
+```
+
+## Create a Kafka topic and write data
+
+If you are using the Confluent Platform, the easiest way to create a Kafka topic is by using the supplied Control Center UX. Otherwise, you can create a Kafka topic manually using the following syntax:
+
+```bash
+./kafka-topics.sh --create --zookeeper <ZOOKEEPER_URL:PORT> --replication-factor <NO_OF_REPLICATIONS> --partitions <NO_OF_PARTITIONS> --topic <TOPIC_NAME>
+```
+
+For this scenario, we will create a Kafka topic named "hotels" and will write non-schema embedded JSON data to the topic. To create a topic inside Control Center, see the [Confluent guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
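+
+For example, outside of Control Center you could create the topic with the CLI syntax shown earlier. A minimal sketch for a local single-broker setup (the ZooKeeper address and the replica/partition counts are assumptions):
+
+```bash
+# Create the "hotels" topic on a local cluster.
+./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hotels
+```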
+
+Next, start the Kafka console producer to write a few records to the "hotels" topic.
+
+```powershell
+# Option 1: If using Codespaces, use the built-in CLI utility
+kafka-console-producer --broker-list localhost:9092 --topic hotels
+
+# Option 2: Using this repo's Confluent Platform setup, first exec into the broker container
+docker exec -it broker /bin/bash
+kafka-console-producer --broker-list localhost:9092 --topic hotels
+
+# Option 3: Using your Confluent Platform setup and CLI install
+<path-to-confluent>/bin/kafka-console-producer --broker-list <kafka broker hostname> --topic hotels
+```
+
+In the console producer, enter:
+
+```json
+{"id": "h1", "HotelName": "Marriott", "Description": "Marriott description"}
+{"id": "h2", "HotelName": "HolidayInn", "Description": "HolidayInn description"}
+{"id": "h3", "HotelName": "Motel8", "Description": "Motel8 description"}
+```
+
+The three records entered are published to the "hotels" Kafka topic in JSON format.
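+
+Optionally, before you create the sink connector, you can confirm that the records landed on the topic by consuming it from the command line. This is a minimal sketch that assumes the same local broker address used by the console producer.
+
+```bash
+# read the topic from the beginning to confirm the three hotel records are there
+kafka-console-consumer --bootstrap-server localhost:9092 --topic hotels --from-beginning
+```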
+
+## Create the sink connector
+
+Create the Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines the config for the sink connector. Make sure to replace the values for `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key` with the values you saved from the Azure Cosmos DB setup guide in the prerequisites.
+
+Refer to the [sink properties](#sink-configuration-properties) section for more information on each of these configuration properties.
+
+```json
+{
+ "name": "cosmosdb-sink-connector",
+ "config": {
+ "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
+ "tasks.max": "1",
+ "topics": [
+ "hotels"
+ ],
+ "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "value.converter.schemas.enable": "false",
+ "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "key.converter.schemas.enable": "false",
+ "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
+ "connect.cosmos.master.key": "<cosmosdbprimarykey>",
+ "connect.cosmos.databasename": "kafkaconnect",
+ "connect.cosmos.containers.topicmap": "hotels#kafka"
+ }
+}
+```
+
+Once you have all the values filled out, save the JSON file somewhere locally. You can use this file to create the connector using the REST API.
+
+### Create connector using Control Center
+
+An easy option to create the connector is by going through the Control Center webpage. Follow this [installation guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. Instead of using the `DatagenConnector` option, use the `CosmosDBSinkConnector` tile. When configuring the sink connector, fill out the values as you have filled in the JSON file.
+
+Alternatively, in the connectors page, you can upload the JSON file created earlier by using the **Upload connector config file** option.
++
+### Create connector using REST API
+
+Create the sink connector using the Connect REST API:
+
+```bash
+# Curl to Kafka connect service
+curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors
+
+```
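+
+After the connector is created, you can check that it registered correctly and that its task is running. The call below uses the standard Kafka Connect REST API status endpoint; the connector name matches the `name` field in the JSON config above.
+
+```bash
+# check the state of the sink connector and its task
+curl http://localhost:8083/connectors/cosmosdb-sink-connector/status
+```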
+
+## Confirm data written to Cosmos DB
+
+Sign in to the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account.
+
+## Cleanup
+
+To delete the connector from the Control Center, navigate to the sink connector you created and select the **Delete** icon.
++
+Alternatively, use the Connect REST API to delete:
+
+```bash
+# Curl to Kafka connect service
+curl -X DELETE http://localhost:8083/connectors/cosmosdb-sink-connector
+```
+
+To delete the created Azure Cosmos DB service and its resource group using Azure CLI, refer to these [steps](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md#cleanup).
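+
+As a quick sketch, deleting the resource group removes the account and everything in it. The resource group name below is a placeholder; use the name you chose in the setup guide.
+
+```bash
+# delete the resource group that holds the Azure Cosmos DB account (irreversible)
+az group delete --name <resource-group-name> --yes --no-wait
+```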
+
+## <a id="sink-configuration-properties"></a>Sink configuration properties
+
+The following settings are used to configure the Azure Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics are consumed, which Azure Cosmos DB containers the data is written to, and the formats used to serialize the data. For an example configuration file with the default values, refer to [this config](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
+
+| Name | Type | Description | Required/Optional |
+| :--- | :--- | :--- | :--- |
+| topics | list | A list of Kafka topics to watch. | Required |
+| connector.class | string | Class name of the Azure Cosmos DB sink. It should be set to `com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector`. | Required |
+| connect.cosmos.connection.endpoint | uri | Azure Cosmos endpoint URI string. | Required |
+| connect.cosmos.master.key | string | The Azure Cosmos DB primary key that the sink connects with. | Required |
+| connect.cosmos.databasename | string | The name of the Azure Cosmos database the sink writes to. | Required |
+| connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Azure Cosmos DB containers, formatted using CSV as shown: `topic#container,topic2#container2`. | Required |
+| key.converter | string | Serialization format for the key data written into Kafka topic. | Required |
+| value.converter | string | Serialization format for the value data written into the Kafka topic. | Required |
+| key.converter.schemas.enable | string | Set to "true" if the key data has an embedded schema. | Optional |
+| value.converter.schemas.enable | string | Set to "true" if the value data has an embedded schema. | Optional |
+| tasks.max | int | Maximum number of connector sink tasks. Default is `1`. | Optional |
+
+Data will always be written to Azure Cosmos DB as JSON without any schema.
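+
+If you need to change any of the configuration properties above after the connector has been created, you can replace its configuration in place through the Kafka Connect REST API instead of deleting and recreating it. A minimal sketch, assuming the connector name used earlier; note that the `PUT` body is the flat `config` object only, not the wrapper with `name` and `config` fields.
+
+```bash
+# replace the sink connector's configuration with an updated JSON config file
+curl -H "Content-Type: application/json" -X PUT \
+  -d @<path-to-updated-config-JSON> \
+  http://localhost:8083/connectors/cosmosdb-sink-connector/config
+```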
+
+## Supported data types
+
+The Azure Cosmos DB sink connector converts the sink record into a JSON document supporting the following schema types:
+
+| Schema type | JSON data type |
+| :--- | :--- |
+| Array | Array |
+| Boolean | Boolean |
+| Float32 | Number |
+| Float64 | Number |
+| Int8 | Number |
+| Int16 | Number |
+| Int32 | Number |
+| Int64 | Number|
+| Map | Object (JSON)|
+| String | String<br> Null |
+| Struct | Object (JSON) |
+
+The sink connector also supports the following AVRO logical types:
+
+| Schema Type | JSON Data Type |
+| :--- | :--- |
+| Date | Number |
+| Time | Number |
+| Timestamp | Number |
+
+> [!NOTE]
+> Byte deserialization is currently not supported by the Azure Cosmos DB sink connector.
+
+## Single Message Transforms (SMT)
+
+Along with the sink connector settings, you can specify the use of Single Message Transformations (SMTs) to modify messages flowing through the Kafka Connect platform. For more information, refer to the [Confluent SMT Documentation](https://docs.confluent.io/platform/current/connect/transforms/overview.html).
+
+### Using the InsertUUID SMT
+
+You can use InsertUUID SMT to automatically add item IDs. With the custom `InsertUUID` SMT, you can insert the `id` field with a random UUID value for each message, before it is written to Azure Cosmos DB.
+
+> [!WARNING]
+> Use this SMT only if the messages don't contain the `id` field. Otherwise, the `id` values will be overwritten and you may end up with duplicate items in your database. Using UUIDs as the message ID can be quick and easy, but they are [not an ideal partition key](https://stackoverflow.com/questions/49031461/would-using-a-substring-of-a-guid-in-cosmosdb-as-partitionkey-be-a-bad-idea) to use in Azure Cosmos DB.
+
+### Install the SMT
+
+Before you can use the `InsertUUID` SMT, you will need to install this transform in your Confluent Platform setup. If you are using the Confluent Platform setup from this repo, the transform is already included in the installation, and you can skip this step.
+
+Alternatively, you can package the [InsertUUID source](https://github.com/confluentinc/kafka-connect-insert-uuid) to create a new JAR file. To install the transform manually using the JAR file, refer to these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually).
+
+```bash
+# clone the kafka-connect-insert-uuid repo
+git clone https://github.com/confluentinc/kafka-connect-insert-uuid.git
+cd kafka-connect-insert-uuid
+
+# package the source code into a JAR file
+mvn clean package
+
+# include the following JAR file in Confluent Platform installation
+ls target/*.jar
+```
+
+### Configure the SMT
+
+Inside your sink connector config, add the following properties to set the `id`.
+
+```json
+"transforms": "insertID",
+"transforms.insertID.type": "com.github.cjmatta.kafka.connect.smt.InsertUuid$Value",
+"transforms.insertID.uuid.field.name": "id"
+```
+
+For more information on using this SMT, see the [InsertUUID repository](https://github.com/confluentinc/kafka-connect-insert-uuid).
+
+### Using SMTs to configure Time to live (TTL)
+
+Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-azure-portal) doc.
+
+Inside your sink connector config, add the following properties to set the TTL in seconds. In the following example, the TTL is set to 100 seconds. If the message already contains the `TTL` field, the `TTL` value will be overwritten by these SMTs.
+
+```json
+"transforms": "insertTTL,castTTLInt",
+"transforms.insertTTL.type": "org.apache.kafka.connect.transforms.InsertField$Value",
+"transforms.insertTTL.static.field": "ttl",
+"transforms.insertTTL.static.value": "100",
+"transforms.castTTLInt.type": "org.apache.kafka.connect.transforms.Cast$Value",
+"transforms.castTTLInt.spec": "ttl:int32"
+```
+
+For more information on using these SMTs, see the [InsertField](https://docs.confluent.io/platform/current/connect/transforms/insertfield.html) and [Cast](https://docs.confluent.io/platform/current/connect/transforms/cast.html) documentation.
+
+## Troubleshooting common issues
+
+Here are solutions to some common problems that you may encounter when working with the Kafka sink connector.
+
+### Read non-JSON data with JsonConverter
+
+If you have non-JSON data on your source topic in Kafka and attempt to read it using the `JsonConverter`, you will see the following exception:
+
+```console
+org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
+…
+org.apache.kafka.common.errors.SerializationException: java.io.CharConversionException: Invalid UTF-32 character 0x1cfa7e2 (above 0x0010ffff) at char #1, byte #7)
+
+```
+
+This error is likely caused by data in the source topic being serialized in Avro or another format, such as a CSV string.
+
+**Solution**: If the topic data is in AVRO format, then change your Kafka Connect sink connector to use the `AvroConverter` as shown below.
+
+```json
+"value.converter": "io.confluent.connect.avro.AvroConverter",
+"value.converter.schema.registry.url": "http://schema-registry:8081",
+```
+
+### Read non-Avro data with AvroConverter
+
+This scenario is applicable when you try to use the Avro converter to read data from a topic that is not in Avro format. This includes data written by an Avro serializer other than the Confluent Schema Registry's Avro serializer, which has its own wire format.
+
+```console
+org.apache.kafka.connect.errors.DataException: my-topic-name
+at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)
+…
+org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
+org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
+
+```
+
+**Solution**: Check the source topic's serialization format. Then, either switch Kafka Connect's sink connector to use the right converter or switch the upstream format to Avro.
+
+### Read a JSON message without the expected schema/payload structure
+
+Kafka Connect supports a special structure of JSON messages containing both payload and schema as follows.
+
+ ```json
+{
+ "schema": {
+ "type": "struct",
+ "fields": [
+ {
+ "type": "int32",
+ "optional": false,
+ "field": "userid"
+ },
+ {
+ "type": "string",
+ "optional": false,
+ "field": "name"
+ }
+ ]
+ },
+ "payload": {
+ "userid": 123,
+ "name": "Sam"
+ }
+}
+```
+
+If you try to read JSON data that does not contain the data in this structure, you will get the following error:
+
+```none
+org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+```
+
+To be clear, the only JSON structure that is valid for `schemas.enable=true` has schema and payload fields as the top-level elements as shown above. As the error message states, if you just have plain JSON data, you should change your connector's configuration to:
+
+```json
+"value.converter": "org.apache.kafka.connect.json.JsonConverter",
+"value.converter.schemas.enable": "false",
+```
+
+## Limitations
+
+* Autocreation of databases and containers in Azure Cosmos DB is not supported. The database and containers must already exist, and they must be configured correctly.
+
+## Next steps
+
+You can learn more about the change feed in Azure Cosmos DB with the following docs:
+
+* [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
+* [Reading from change feed](read-change-feed.md)
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/kafka-connector-source.md
+
+ Title: Kafka Connect for Azure Cosmos DB - Source connector
+description: Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic. Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB.
++++ Last updated : 06/28/2021+++
+# Kafka Connect for Azure Cosmos DB - Source connector
+
+Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB source connector provides the capability to read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic.
+
+## Prerequisites
+
+* Start with the [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md) because it gives you a complete environment to work with. If you do not wish to use Confluent Platform, you need to install and configure ZooKeeper, Apache Kafka, and Kafka Connect yourself. You will also need to install and configure the Azure Cosmos DB connectors manually.
+* An Azure Cosmos DB account and container. See the [setup guide](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md).
+* Bash shell, which is tested on GitHub Codespaces, Mac, Ubuntu, and Windows with WSL2. This shell doesn't work in Cloud Shell or WSL1.
+* Download [Java 11+](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
+* Download [Maven](https://maven.apache.org/download.cgi)
+
+## Install the source connector
+
+If you are using the recommended [Confluent platform setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Confluent_Platform_Setup.md), the Azure Cosmos DB source connector is included in the installation, and you can skip this step.
+
+Otherwise, you can use the JAR file from the latest [Release](https://github.com/microsoft/kafka-connect-cosmosdb/releases) and install the connector manually. To learn more, see these [instructions](https://docs.confluent.io/current/connect/managing/install.html#install-connector-manually). You can also package a new JAR file from the source code:
+
+```bash
+# clone the kafka-connect-cosmosdb repo if you haven't done so already
+git clone https://github.com/microsoft/kafka-connect-cosmosdb.git
+cd kafka-connect-cosmosdb
+
+# package the source code into a JAR file
+mvn clean package
+
+# include the following JAR file in Confluent Platform installation
+ls target/*dependencies.jar
+```
+
+## Create a Kafka topic
+
+Create a Kafka topic using Confluent Control Center. For this scenario, we will create a Kafka topic named "apparels" and write JSON data without an embedded schema to the topic. To create a topic inside the Control Center, see [create Kafka topic doc](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-2-create-ak-topics).
+
+## Create the source connector
+
+### Create the source connector in Kafka Connect
+
+To create the Azure Cosmos DB source connector in Kafka Connect, use the following JSON config. Make sure to replace the placeholder values for the `connect.cosmos.connection.endpoint` and `connect.cosmos.master.key` properties with the values you saved from the Azure Cosmos DB setup guide in the prerequisites.
+
+```json
+{
+ "name": "cosmosdb-source-connector",
+ "config": {
+ "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
+ "tasks.max": "1",
+ "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "connect.cosmos.task.poll.interval": "100",
+ "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
+ "connect.cosmos.master.key": "<cosmosdbprimarykey>",
+ "connect.cosmos.databasename": "kafkaconnect",
+ "connect.cosmos.containers.topicmap": "apparels#kafka",
+ "connect.cosmos.offset.useLatest": false,
+ "value.converter.schemas.enable": "false",
+ "key.converter.schemas.enable": "false"
+ }
+}
+```
+
+For more information on each of the above configuration properties, see the [source properties](#source-configuration-properties) section. Once you have all the values filled out, save the JSON file somewhere locally. You can use this file to create the connector using the REST API.
+
+#### Create connector using Control Center
+
+An easy option to create the connector is from the Confluent Control Center portal. Follow the [Confluent setup guide](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-3-install-a-ak-connector-and-generate-sample-data) to create a connector from Control Center. When setting up, instead of using the `DatagenConnector` option, use the `CosmosDBSourceConnector` tile. When configuring the source connector, fill out the values as you have filled in the JSON file.
+
+Alternatively, in the connectors page, you can upload the JSON file built from the previous section by using the **Upload connector config file** option.
++
+#### Create connector using REST API
+
+Create the source connector using the Connect REST API:
+
+```bash
+# Curl to Kafka connect service
+curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> http://localhost:8083/connectors
+```
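+
+You can confirm that the Connect worker accepted the connector by listing the registered connectors; `cosmosdb-source-connector` should appear in the response.
+
+```bash
+# list connectors registered with the Kafka Connect worker
+curl http://localhost:8083/connectors
+```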
+
+## Insert document into Azure Cosmos DB
+
+1. Sign in to the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account.
+1. Open the **Data Explorer** tab and select **Databases**.
+1. Open the "kafkaconnect" database and "kafka" container you created earlier.
+1. To create a new JSON document, in the SQL API pane, expand the "kafka" container, select **Items**, then select **New Item** in the toolbar.
+1. Now, add a document to the container with the following structure. Paste the following sample JSON block into the Items tab, overwriting the current content:
+
+ ``` json
+
+ {
+ "id": "2",
+ "productId": "33218897",
+ "category": "Women's Outerwear",
+ "manufacturer": "Contoso",
+ "description": "Black wool pea-coat",
+ "price": "49.99",
+ "shipping": {
+ "weight": 2,
+ "dimensions": {
+ "width": 8,
+ "height": 11,
+ "depth": 3
+ }
+ }
+ }
+
+ ```
+
+1. Select **Save**.
+1. Confirm the document has been saved by viewing the Items on the left-hand menu.
+
+### Confirm data written to Kafka topic
+
+1. Open the Kafka Topic UI at `http://localhost:9000`.
+1. Select the Kafka "apparels" topic you created.
+1. Verify that the document you inserted into Azure Cosmos DB earlier appears in the Kafka topic.
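+
+As an alternative to the Kafka Topic UI, you can consume the topic from the command line. This is a sketch that assumes a broker reachable at `localhost:9092`.
+
+```bash
+# read the "apparels" topic from the beginning to confirm the document arrived
+kafka-console-consumer --bootstrap-server localhost:9092 --topic apparels --from-beginning
+```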
+
+### Cleanup
+
+To delete the connector from the Confluent Control Center, navigate to the source connector you created and select the **Delete** icon.
++
+Alternatively, use the Connect REST API to delete the connector:
+
+```bash
+# Curl to Kafka connect service
+curl -X DELETE http://localhost:8083/connectors/cosmosdb-source-connector
+```
+
+To delete the created Azure Cosmos DB service and its resource group using Azure CLI, refer to these [steps](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/CosmosDB_Setup.md#cleanup).
+
+## Source configuration properties
+
+The following settings are used to configure the Kafka source connector. These configuration values determine which Azure Cosmos DB container is consumed, which Kafka topics the data is written to, and the formats used to serialize the data. For an example with default values, see this [configuration file](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/source.example.json).
+
+| Name | Type | Description | Required/optional |
+| :--- | :--- | :--- | :--- |
+| connector.class | String | Class name of the Azure Cosmos DB source. It should be set to `com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector` | Required |
+| connect.cosmos.databasename | String | Name of the database to read from. | Required |
+| connect.cosmos.master.key | String | The Azure Cosmos DB primary key. | Required |
+| connect.cosmos.connection.endpoint | URI | The account endpoint. | Required |
+| connect.cosmos.containers.topicmap | String | Comma-separated topic-to-container mapping. For example, `topic1#coll1,topic2#coll2` | Required |
+| connect.cosmos.messagekey.enabled | Boolean | Indicates whether the Kafka message key should be set. Default value is `true`. | Required |
+| connect.cosmos.messagekey.field | String | Use the field's value from the document as the message key. Default is `id`. | Required |
+| connect.cosmos.offset.useLatest | Boolean | Set to `true` to use the most recent source offset. Set to `false` to use the earliest recorded offset. Default value is `false`. | Required |
+| connect.cosmos.task.poll.interval | Int | Interval to poll the change feed container for changes. | Required |
+| key.converter | String | Serialization format for the key data written into Kafka topic. | Required |
+| value.converter | String | Serialization format for the value data written into the Kafka topic. | Required |
+| key.converter.schemas.enable | String | Set to `true` if the key data has embedded schema. | Optional |
+| value.converter.schemas.enable | String | Set to `true` if the value data has embedded schema. | Optional |
+| tasks.max | Int | Maximum number of connector source tasks. Default value is `1`. | Optional |
+
+## Supported data types
+
+The Azure Cosmos DB source connector converts the JSON document to a schema and supports the following JSON data types:
+
+| JSON data type | Schema type |
+| :--- | :--- |
+| Array | Array |
+| Boolean | Boolean |
+| Number | Float32<br>Float64<br>Int8<br>Int16<br>Int32<br>Int64|
+| Null | String |
+| Object (JSON)| Struct|
+| String | String |
+
+## Next steps
+
+* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Kafka Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/kafka-connector.md
+
+ Title: Use Kafka Connect for Azure Cosmos DB to read and write data
+description: Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems
++++ Last updated : 06/28/2021+++
+# Kafka Connect for Azure Cosmos DB
+
+[Kafka Connect](http://kafka.apache.org/documentation.html#connect) is a tool for scalably and reliably streaming data between Apache Kafka and other systems. Using Kafka Connect, you can define connectors that move large data sets into and out of Kafka. Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB.
+
+## Source and sink connector semantics
+
+* **Source connector** - Currently, this connector supports at-least-once delivery with multiple tasks and exactly-once delivery for single tasks.
+
+* **Sink connector** - This connector fully supports exactly-once semantics.
+
+## Supported data formats
+
+The source and sink connectors can be configured to support the following data formats:
+
+| Format | Description |
+| :-- | :- |
+| Plain JSON | JSON record structure without any attached schema. |
+| JSON with schema | JSON record structure with explicit schema information to ensure the data matches the expected format. |
+| AVRO | A row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format. |
+
+The key and value settings, including the format and serialization, can be configured independently in Kafka. So, it is possible to work with different data formats for keys and values, respectively. To cater for different data formats, there is converter configuration for both `key.converter` and `value.converter`.
+
+## Converter configuration examples
+
+### <a id="json-plain"></a>Plain JSON
+
+If you need to use JSON without a schema registry for Connect data, use the `JsonConverter` supported with Kafka. The following example shows the `JsonConverter` key and value properties that are added to the configuration:
+
+ ```java
+ key.converter=org.apache.kafka.connect.json.JsonConverter
+ key.converter.schemas.enable=false
+ value.converter=org.apache.kafka.connect.json.JsonConverter
+ value.converter.schemas.enable=false
+ ```
+
+### <a id="json-with-schema"></a>JSON with schema
+
+Set the properties `key.converter.schemas.enable` and `value.converter.schemas.enable` to true so that the key or value is treated as a composite JSON object that contains both an internal schema and the data. Without these properties, the key or value is treated as plain JSON.
+
+ ```java
+ key.converter=org.apache.kafka.connect.json.JsonConverter
+ key.converter.schemas.enable=true
+ value.converter=org.apache.kafka.connect.json.JsonConverter
+ value.converter.schemas.enable=true
+ ```
+
+The resulting message to Kafka would look like the example below, with schema and payload as top-level elements in the JSON:
+
+ ```json
+ {
+ "schema": {
+ "type": "struct",
+ "fields": [
+ {
+ "type": "int32",
+ "optional": false,
+ "field": "userid"
+ },
+ {
+ "type": "string",
+ "optional": false,
+ "field": "name"
+ }
+ ],
+ "optional": false,
+ "name": "ksql.users"
+ },
+ "payload": {
+ "userid": 123,
+ "name": "user's name"
+ }
+ }
+ ```
+
+> [!NOTE]
+> The message written to Azure Cosmos DB is made up of the schema and payload. Notice the size of the message, as well as the proportion of it that is made up of the payload vs. the schema. The schema is repeated in every message you write to Kafka. In scenarios like this, you may want to use a serialization format like JSON Schema or AVRO, where the schema is stored separately, and the message holds just the payload.
+
+### <a id="avro"></a>AVRO
+
+The Kafka connector supports the AVRO data format. To use AVRO format, configure an `AvroConverter` so that Kafka Connect knows how to work with AVRO data. Azure Cosmos DB Kafka Connect has been tested with the [AvroConverter](https://www.confluent.io/hub/confluentinc/kafka-connect-avro-converter) supplied by Confluent, under Apache 2.0 license. You can also use a different custom converter if you prefer.
+
+Kafka deals with keys and values independently. Specify the `key.converter` and `value.converter` properties as required in the worker configuration. When using `AvroConverter`, add an extra converter property that provides the URL for the schema registry. The following example shows the AvroConverter key and value properties that are added to the configuration:
+
+ ```java
+ key.converter=io.confluent.connect.avro.AvroConverter
+ key.converter.schema.registry.url=http://schema-registry:8081
+ value.converter=io.confluent.connect.avro.AvroConverter
+ value.converter.schema.registry.url=http://schema-registry:8081
+ ```
+
+## Choose a conversion format
+
+The following are some considerations on how to choose a conversion format:
+
+* When configuring a **Source connector**:
+
+ * If you want Kafka Connect to include plain JSON in the message it writes to Kafka, set [Plain JSON](#json-plain) configuration.
+
+ * If you want Kafka Connect to include the schema in the message it writes to Kafka, set [JSON with Schema](#json-with-schema) configuration.
+
+ * If you want Kafka Connect to include AVRO format in the message it writes to Kafka, set [AVRO](#avro) configuration.
+
+* If you're consuming JSON data from a Kafka topic into a **Sink connector**, understand how the JSON was serialized when it was written to the Kafka topic:
+
+ * If it was written with JSON serializer, set Kafka Connect to use the JSON converter `(org.apache.kafka.connect.json.JsonConverter)`.
+
+ * If the JSON data was written as a plain string, determine if the data includes a nested schema or payload. If it does, set [JSON with schema](#json-with-schema) configuration.
+ * However, if you're consuming JSON data and it doesn't have the schema or payload construct, then you must tell Kafka Connect **not** to look for a schema by setting `schemas.enable=false` as per [Plain JSON](#json-plain) configuration.
+
+ * If it was written with AVRO serializer, set Kafka Connect to use the AVRO converter `(io.confluent.connect.avro.AvroConverter)` as per [AVRO](#avro) configuration.
+
+## Configuration
+
+### Common configuration properties
+
+The source and sink connectors share the following common configuration properties:
+
+| Name | Type | Description | Required/Optional |
+| :--- | :--- | :--- | :--- |
+| connect.cosmos.connection.endpoint | uri | Cosmos endpoint URI string | Required |
+| connect.cosmos.master.key | string | The Azure Cosmos DB primary key that the sink connects with. | Required |
+| connect.cosmos.databasename | string | The name of the Azure Cosmos database the sink writes to. | Required |
+| connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Azure Cosmos DB containers. It is formatted using CSV as `topic#container,topic2#container2` | Required |
+
+For sink connector-specific configuration, see the [Sink Connector Documentation](kafka-connector-sink.md).
+
+For source connector-specific configuration, see the [Source Connector Documentation](kafka-connector-source.md).
+
+## Common configuration errors
+
+If you misconfigure the converters in Kafka Connect, it can result in errors. These errors show up at the sink connector, because that's where the messages already stored in Kafka are deserialized. Converter problems don't usually occur at the source because serialization is set at the source.
+
+For more information, see [common configuration errors](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#common-errors) doc.
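+
+When a converter is misconfigured, the affected task usually fails with a stack trace that names the converter and the deserialization error. One quick way to surface it is the Kafka Connect status endpoint; substitute your connector's name for the placeholder.
+
+```bash
+# inspect the connector and task state; failed tasks include a "trace" field with the error
+curl http://localhost:8083/connectors/<connector-name>/status
+```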
+
+## Project setup
+
+Refer to the [Developer walkthrough and project setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Developer_Walkthrough.md) for initial setup instructions.
+
+## Performance testing
+
+For more information on the performance tests run for the sink and source connectors, see the [Performance testing document](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/Performance_Testing.md).
+
+Refer to the [Performance environment setup](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/perf/README.md) for exact steps on deploying the performance test environment for the connectors.
+
+## Resources
+
+* [Kafka Connect](http://kafka.apache.org/documentation.html#connect)
+* [Kafka Connect Deep Dive – Converters and Serialization Explained](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/)
+
+## Next steps
+
+* Kafka Connect for Azure Cosmos DB [source connector](kafka-connector-source.md)
+* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Migrate Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-java-v4-sdk.md
> Because Azure Cosmos DB Java SDK v4 has up to 20% enhanced throughput, TCP-based direct mode, and support for the latest backend service features, we recommend you upgrade to v4 at the next opportunity. Continue reading below to learn more. >
-This article explains how to upgrade your existing Java application that is using an older Azure Cosmos DB Java SDK to the newer Azure Cosmos DB Java SDK 4.0 for Core (SQL) API. Azure Cosmos DB Java SDK v4 corresponds to the `com.azure.cosmos` package. You can use the instructions in this doc if you are migrating your application from any of the following Azure Cosmos DB Java SDKs:
+Update to the latest Azure Cosmos DB Java SDK to get the best of what Azure Cosmos DB has to offer - a managed non-relational database service with competitive performance, five-nines availability, one-of-a-kind resource governance, and more. This article explains how to upgrade your existing Java application that is using an older Azure Cosmos DB Java SDK to the newer Azure Cosmos DB Java SDK 4.0 for Core (SQL) API. Azure Cosmos DB Java SDK v4 corresponds to the `com.azure.cosmos` package. You can use the instructions in this doc if you are migrating your application from any of the following Azure Cosmos DB Java SDKs:
* Sync Java SDK 2.x.x * Async Java SDK 2.x.x
cost-management-billing Understand Mca Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/understand-mca-roles.md
The following table shows what role you need to complete tasks in the context of
## Manage billing roles in the Azure portal -- Assign a role to a user or group at a billing scope such as billing account, billing profile, or invoice section, where you want to give access.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Search for **Cost Management + Billing**.
+
+ ![Screenshot that shows Azure portal search](./media/understand-mca-roles/billing-search-cost-management-billing.png)
+
+3. Select **Access control (IAM)** at a scope such as billing account, billing profile, or invoice section, where you want to give access.
+
+4. The Access control (IAM) page lists users and groups that are assigned to each role for that scope.
+
+ ![Screenshot that shows list of admins for billing account](./media/understand-mca-roles/billing-list-admins.png)
+
+5. To give access to a user, select **Add** from the top of the page. In the Role drop-down list, select a role. Enter the email address of the user to whom you want to give access. Select **Save** to assign the role.
+
+ ![Screenshot that shows adding an admin to a billing account](./media/understand-mca-roles/billing-add-admin.png)
+
+6. To remove access for a user, select the user with the role assignment you want to remove. Select **Remove**.
+
+ ![Screenshot that shows removing an admin from a billing account](./media/understand-mca-roles/billing-remove-admin.png)
## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
description: Learn how to troubleshoot connector issues in Azure Data Factory.
Previously updated : 06/24/2021 Last updated : 07/12/2021
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- The first row with white spaces is used as the column name. - The type OriginalType is supported. Try to avoid using these special characters: `,;{}()\n\t=`.
+### Error code: ParquetDateTimeExceedLimit
+
+- **Message**: `The Ticks value '%ticks;' for the datetime column must be between valid datetime ticks range -621355968000000000 and 2534022144000000000.`
+
+- **Cause**: If the datetime value is '0001-01-01 00:00:00', it could be caused by the difference between Julian Calendar and Gregorian Calendar. For more details, reference [Difference between Julian and proleptic Gregorian calendar dates](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar#Difference_between_Julian_and_proleptic_Gregorian_calendar_dates).
+
+- **Resolution**: Check the ticks value and avoid using the datetime value '0001-01-01 00:00:00'.
+
+### Error code: ParquetInvalidColumnName
+
+- **Message**: `The column name is invalid. Column name cannot contain these character:[,;{}()\n\t=]`
+
+- **Cause**: The column name contains invalid characters.
+
+- **Resolution**: Add or modify the column mapping to make the sink column name valid.
## REST
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-switch-activity.md
The Switch activity provides the same functionality that a switch statement provides in programming languages. It evaluates a set of activities corresponding to a case that matches the condition evaluation.
-> [!NOTE]
-> This section provides JSON definitions of Switch activity. Expressions for Switch, Cases etc. that evaluate to string should not contain '.' character which is a reserved character.
->
+ ## Syntax ```json
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
This article explores common troubleshooting methods for mapping data flows in A
- **Cause**: Invalid privacy configurations are provided. - **Recommendation**: Please update AdobeIntegration settings while only privacy 'GDPR' is supported.
+### Error code: DF-Executor-RemoteRPCClientDisassociated
+- **Message**: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues.
+- **Cause**: Data flow activity runs fail because of a transient network issue or because one node in the Spark cluster runs out of memory.
+- **Recommendation**: Use the following options to solve this problem:
+  - Option-1: Use a powerful cluster (both driver and executor nodes have enough memory to handle big data) to run data flow pipelines with the "Compute type" setting set to "Memory optimized". The settings are shown in the picture below.
+
+ :::image type="content" source="media/data-flow-troubleshoot-guide/configure-compute-type.png" alt-text="Screenshot that shows the configuration of Compute type.":::
+
+ - Option-2: Use a larger cluster size (for example, 48 cores) to run your data flow pipelines. You can learn more about cluster size through this document: [Cluster size](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance#cluster-size).
+
+ - Option-3: Repartition your input data. For the task running on the data flow Spark cluster, one partition is one task that runs on one node. If the data in one partition is too large, the related task running on the node needs to consume more memory than the node itself has, which causes failure. So you can use repartitioning to avoid data skew, and to ensure that the data size in each partition is even while memory consumption is not too heavy.
+
+ :::image type="content" source="media/data-flow-troubleshoot-guide/configure-partition.png" alt-text="Screenshot that shows the configuration of partitions.":::
+
+ > [!NOTE]
+ > You need to evaluate the data size or the partition number of input data, then set a reasonable partition number under "Optimize". For example, say the cluster that you use in the data flow pipeline execution has 8 cores and the memory of each core is 20 GB, but the input data is 1000 GB with 10 partitions. If you directly run the data flow, it will hit an out-of-memory (OOM) issue because 1000 GB/10 > 20 GB, so it is better to set the repartition number to 100 (1000 GB/100 < 20 GB).
+
+ - Option-4: Tune and optimize source/sink/transformation settings. For example, try to copy all files in one container, and don't use the wildcard pattern. For more detailed information, reference [Mapping data flows performance and tuning guide](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance).
++ ## Miscellaneous troubleshooting tips - **Issue**: Unexpected exception occurred and execution failed. - **Message**: During Data Flow activity execution: Hit unexpected exception and execution failed.
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delimited-text.md
The below table lists the properties supported by a delimited text source. You c
| Filter by last modified | Choose to filter files based upon when they were last altered | no | Timestamp | modifiedAfter <br> modifiedBefore | | Allow no files found | If true, an error is not thrown if no files are found | no | `true` or `false` | ignoreNoFilesFound |
+> [!NOTE]
+> Data flow sources support a list of files that is limited to 1024 entries in your file. To include more files, use wildcards in your file list.
+ ### Source example The below image is an example of a delimited text source configuration in mapping data flows.
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
Finally, you download and install the latest version of self-hosted IR, as well
If you use OLEDB/ODBC/ADO.NET drivers for other database systems, such as PostgreSQL, MySQL, Oracle, and so on, you can download the 64-bit versions from their websites. - If you use data flow components from Azure Feature Pack in your packages, [download and install Azure Feature Pack for SQL Server 2017](https://www.microsoft.com/download/details.aspx?id=54798) on the same machine where your self-hosted IR is installed, if you haven't done so already.-- If you haven't done so already, [download and install the 64-bit version of Visual C++ (VC) runtime](https://www.microsoft.com/download/details.aspx?id=40784) on the same machine where your self-hosted IR is installed.
+- If you haven't done so already, [download and install the 64-bit version of Visual C++ (VC) runtime](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0) on the same machine where your self-hosted IR is installed.
### Enable Windows authentication for on-premises tasks
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Title: Forward alert information description: You can send alert information to partner systems by working with forwarding rules. Previously updated : 12/02/2020 Last updated : 07/12/2021
Relevant information is sent to partner systems when forwarding rules are create
## Create forwarding rules
-To create a new forwarding rule:
+**To create a new forwarding rule on a sensor**:
+
+1. Sign in to the sensor.
+
+1. Select **Forwarding** on the side menu.
+
+1. Select **Create Forwarding Rule**.
+
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/create-forwarding-rule-screen.png" alt-text="Create a Forwarding Rule icon.":::
+
+1. Enter a name for the forwarding rule.
+
+1. Select the severity level.
+
+1. Select any protocols to apply.
+
+1. Select which engines the rule should apply to.
+
+1. Select an action to apply, and fill in any parameters needed for the selected action.
+
+1. Add another action if desired.
+
+1. Select **Submit**.
+
+**To create a forwarding rule on the management console**:
+
+1. Sign in to the management console.
1. Select **Forwarding** on the side menu.
- ::image type="content" source="media/how-to-work-with-alerts-sensor/create-forwarding-rule-screen.png" alt-text="Create a Forwarding Rule icon.":::
+1. Select the :::image type="icon" source="../media/how-to-work-with-alerts-sensor/plus-add-icon.png" border="false"::: icon.
+
+1. In the Create Forwarding Rule window, enter a name for the rule.
+
+ :::image type="content" source="../media/how-to-work-with-alerts-sensor/management-console-create-forwarding-rule.png" alt-text="Enter a meaningful name in the name field of the Create Forwarding Rule window.":::
+
+1. Select the severity level from the drop-down menu.
+
+1. Select any protocols to apply.
-2. Select **Create Forwarding Rule**.
+1. Select which engines the rule should apply to.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/create-a-forwardong-rule.png" alt-text="Create a new forwarding rule.":::
+1. Select the checkbox if you want the forwarding rule to report system notifications.
+
+1. Select the checkbox if you want the forwarding rule to report alert notifications.
-3. Enter the name of the forwarding rule.
+1. Select **Add** to add an action to apply. Fill in any parameters needed for the selected action.
+
+1. Add another action if desired.
+
+1. Select **Save**.
### Forwarding rule criteria
Forwarding rule actions instruct the sensor to forward alert information to part
In addition to the forwarding actions delivered with your system, other actions might become available when you integrate with partner vendors.
-#### Email address action
+### Email address action
Send mail that includes the alert information. You can enter one email address per rule. To define email for the forwarding rule:
-1. Enter a single email address. If more than one mail needs to be sent, create another action.
+1. Enter a single email address. If you need to add more than one email, you will need to create another action for each email address.
-2. Enter the time zone for the time stamp for the alert detection at the SIEM.
+1. Enter the time zone for the time stamp for the alert detection at the SIEM.
-3. Select **Submit**.
+1. Select **Submit**.
-#### Syslog server actions
+### Syslog server actions
The following formats are supported:
Enter the following parameters:
| Date and time | Date and time that the syslog server machine received the information. | | Priority | User.Alert | | Hostname | Sensor IP |
-| Message | Sensor name: The name of the Azure Defender for IoT appliance. <br />LEEF:1.0 <br />Azure Defender for IoT <br />Sensor <br />Sensor version <br />Azure Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. Note that it might be different from the time of the syslog server machine. (This depends on the time-zone configuration.) <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
+| Message | Sensor name: The name of the Azure Defender for IoT appliance. <br />LEEF:1.0 <br />Azure Defender for IoT <br />Sensor <br />Sensor version <br />Azure Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine. (This depends on the time-zone configuration.) <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
After you enter all the information, select **Submit**.
-#### Webhook server action
+### Webhook server action
-Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered,the management console sends a HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
+Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends an HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, incident management systems, and so on.
**To define to a webhook action:** 1. Select the Webhook action.
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/webhook.png" alt-text="Define a webhook forwarding rule.":::
+
+1. Enter the server address in the **URL** field.
+
+1. In the **Key** and **Value fields**, customize the HTTP header with a key and value definition. Keys can only contain letters, numbers, dashes, and underscores. Values can only contain one leading and/or one trailing space.
-1. Enter the server address in the **URL**field.
-1. In the **Key** and **Value**fields, customize the HTTP header with a key and value definition. Keys can only contain letters, numbers, dashes, and underscores. Values can only contain one leading and/or one trailing space.
1. Select **Save**.
-#### NetWitness action
+### NetWitness action
Send alert information to a NetWitness server.
To define NetWitness forwarding parameters:
1. Enter NetWitness **Hostname** and **Port** information.
-2. Enter the time zone for the time stamp for the alert detection at the SIEM.
+1. Enter the time zone for the time stamp for the alert detection at the SIEM.
:::image type="content" source="media/how-to-work-with-alerts-sensor/add-timezone.png" alt-text="Add a time zone to your forwarding rule.":::
-3. Select **Submit**.
+1. Select **Submit**.
-#### Integrated vendor actions
+### Integrated vendor actions
You might have integrated your system with a security, device management, or other industry vendor. These integrations let you:
Use the actions section to enter the credentials and other information required
For details about setting up forwarding rules for the integrations, refer to the relevant partner integration articles.
-### Test forwarding rules
+## Test forwarding rules
Test the connection between the sensor and the partner server that's defined in your forwarding rules: 1. Select the rule from the **Forwarding rule** dialog box.
-2. Select the **More** box.
+1. Select the **More** box.
-3. Select **Send Test Message**.
+1. Select **Send Test Message**.
-4. Go to your partner system to verify that the information sent by the sensor was received.
+1. Go to your partner system to verify that the information sent by the sensor was received.
-### Edit and delete forwarding rules
+## Edit and delete forwarding rules
To edit a forwarding rule:
To remove a forwarding rule:
- On the **Forwarding Rule** screen, select **Remove** under the **More** drop-down menu. In the **Warning** dialog box, select **OK**.
-### Forwarding rules and alert exclusion rules
+## Forwarding rules and alert exclusion rules
The administrator might have defined alert exclusion rules. These rules help administrators achieve more granular control over alert triggering by instructing the sensor to ignore alert events based on various parameters. These parameters might include device addresses, alert names, or specific sensors.
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/quickstart-create-data-migration-service-portal.md
You can clean up the resources created in this quickstart by deleting the [Azure
## Next steps
-* [Migrate SQL Server to Azure SQL Database offline](tutorial-sql-server-to-azure-sql.md)
+* [Migrate SQL Server to Azure SQL Database](tutorial-sql-server-to-azure-sql.md)
* [Migrate SQL Server to an Azure SQL Managed Instance offline](tutorial-sql-server-to-managed-instance.md) * [Migrate SQL Server to an Azure SQL Managed Instance online](tutorial-sql-server-managed-instance-online.md)
hdinsight Apache Spark Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-settings.md
spark.sql.files.maxPartitionBytes 1099511627776
spark.sql.files.openCostInBytes 1099511627776 ```
-The example shown above overrides several default values for five Spark configuration parameters. These values are the compression codec, Apache Hadoop MapReduce split minimum size and parquet block sizes. Also, the Spar SQL partition and open file sizes default values. These configuration changes are chosen because the associated data and jobs (in this example, genomic data) have particular characteristics. These characteristics will do better using these custom configuration settings.
+The example shown above overrides several default values for five Spark configuration parameters. These values are the compression codec, Apache Hadoop MapReduce split minimum size and parquet block sizes. Also, the Spark SQL partition and open file sizes default values. These configuration changes are chosen because the associated data and jobs (in this example, genomic data) have particular characteristics. These characteristics will do better using these custom configuration settings.
industrial-iot Industrial Iot Platform Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/industrial-iot-platform-versions.md
+
+ Title: Azure Industrial IoT platform versions
+description: This article provides an overview of the existing version of the Industrial IoT platform and their support.
++++ Last updated : 03/08/2021+
+# Azure Industrial IoT Platform v2.8.0 LTS
+
+We are pleased to announce the declaration of Long-Term Support (LTS) for version 2.8.0. While we continue to develop and release updates to our ongoing projects on GitHub, we now also offer a branch that will only get critical bug fixes and security updates starting in July 2021. Customers can rely upon a longer-term support lifecycle for these LTS builds, providing stability and assurance for planning on the longer time horizons our customers require. The LTS branch offers customers a guarantee that they will benefit from any necessary security or critical bug fixes with minimal impact to their deployments and module interactions. At the same time, customers can access the latest updates in the main branch to keep pace with the latest developments and fastest cycle time for product updates.
+
+## Version history
+
+|Version |Type |Date |Highlights |
+|--|--|--|--|
+|2.5.4 |Stable |March 2020 |IoT Hub Direct Method Interface, control from cloud without any additional microservices (standalone mode), OPC UA Server Interface, uses OPC Foundation's OPC stack - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.5.4)|
+|2.7.206 |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format as well as PubSub - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)|
+|2.8.0 |Long-term support (LTS)|July 2021 |IoT Edge update to 1.1 LTS, OPC stack logging and tracing for better OPC Publisher diagnostics, Security fixes|
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What is Industrial IoT?](overview-what-is-industrial-iot.md)
iot-accelerators Howto Opc Publisher Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-configure.md
- Title: Configure OPC Publisher - Azure | Microsoft Docs
-description: This article describes how to configure OPC Publisher to specify OPC UA node data changes, OPC UA events to publish and also the telemetry format.
-- Previously updated : 06/10/2019-------
-# Configure OPC Publisher
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-You can configure OPC Publisher to specify:
-- The OPC UA node data changes to publish.
-- The OPC UA events to publish.
-- The telemetry format.
-
-You can configure OPC Publisher using configuration files or using method calls.
-
-## Use configuration files
-
-This section describes the options for configuring OPC UA node publishing with configuration files.
-
-### Use a configuration file to configure publishing data changes
-
-The easiest way to configure the OPC UA nodes to publish is with a configuration file. The configuration file format is documented in [publishednodes.json](https://github.com/Azure/iot-edge-opc-publisher/blob/master/opcpublisher/publishednodes.json) in the repository.
-
-Configuration file syntax has changed over time. OPC Publisher still reads old formats, but converts them into the latest format when it persists the configuration.
-
-The following example shows the format of the configuration file:
-
-```json
-[
- {
- "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
- "UseSecurity": true,
- "OpcNodes": [
- {
- "Id": "i=2258",
- "OpcSamplingInterval": 2000,
- "OpcPublishingInterval": 5000,
- "DisplayName": "Current time"
- }
- ]
- }
-]
-```
-
-### Use a configuration file to configure publishing events
-
-To publish OPC UA events, you use the same configuration file as for data changes.
-
-The following example shows how to configure publishing for events generated by the [SimpleEvents server](https://github.com/OPCFoundation/UA-.NETStandard-Samples/tree/master/Workshop/SimpleEvents/Server), which can be found in the [OPC Foundation repository](https://github.com/OPCFoundation/UA-.NETStandard-Samples):
-
-```json
-[
- {
- "EndpointUrl": "opc.tcp://testserver:62563/Quickstarts/SimpleEventsServer",
- "OpcEvents": [
- {
- "Id": "i=2253",
- "DisplayName": "SimpleEventServerEvents",
- "SelectClauses": [
- {
- "TypeId": "i=2041",
- "BrowsePaths": [
- "EventId"
- ]
- },
- {
- "TypeId": "i=2041",
- "BrowsePaths": [
- "Message"
- ]
- },
- {
- "TypeId": "nsu=http://opcfoundation.org/Quickstarts/SimpleEvents;i=235",
- "BrowsePaths": [
- "/2:CycleId"
- ]
- },
- {
- "TypeId": "nsu=http://opcfoundation.org/Quickstarts/SimpleEvents;i=235",
- "BrowsePaths": [
- "/2:CurrentStep"
- ]
- }
- ],
- "WhereClause": [
- {
- "Operator": "OfType",
- "Operands": [
- {
- "Literal": "ns=2;i=235"
- }
- ]
- }
- ]
- }
- ]
- }
-]
-```
-
-## Use method calls
-
-This section describes the method calls you can use to configure OPC Publisher.
-
-### Configure using OPC UA method calls
-
-OPC Publisher includes an OPC UA Server, which can be accessed on port 62222. If the hostname is **publisher**, then the endpoint URI is: `opc.tcp://publisher:62222/UA/Publisher`.
-
-This endpoint exposes the following four methods:
-- PublishNode
-- UnpublishNode
-- GetPublishedNodes
-- IoTHubDirectMethod
-
-### Configure using IoT Hub direct method calls
-
-OPC Publisher implements the following IoT Hub direct method calls:
-- PublishNodes
-- UnpublishNodes
-- UnpublishAllNodes
-- GetConfiguredEndpoints
-- GetConfiguredNodesOnEndpoint
-- GetDiagnosticInfo
-- GetDiagnosticLog
-- GetDiagnosticStartupLog
-- ExitApplication
-- GetInfo
-
-The formats of the JSON payloads of the method requests and responses are defined in [opcpublisher/HubMethodModel.cs](https://github.com/Azure/iot-edge-opc-publisher/tree/master/opcpublisher).
-
-If you call an unknown method on the module, it responds with a string that says the method isn't implemented. You can call an unknown method as a way to ping the module.
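-
-For example, a request payload for the **PublishNodes** direct method might look like the following sketch. This assumes the method accepts the same shape as an entry in the `publishednodes.json` file shown earlier; check `HubMethodModel.cs` for the authoritative schema:
-
-```json
-{
-  "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
-  "UseSecurity": true,
-  "OpcNodes": [
-    {
-      "Id": "i=2258",
-      "OpcSamplingInterval": 2000,
-      "OpcPublishingInterval": 5000
-    }
-  ]
-}
-```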
-
-### Configure username and password for authentication
-
-The authentication mode can be set through an IoT Hub direct method call. The payload must contain the property **OpcAuthenticationMode** and the username and password:
-
-```json
-{
- "EndpointUrl": "<Url of the endpoint to set authentication settings>",
- "OpcAuthenticationMode": "UsernamePassword",
- "Username": "<Username>",
- "Password": "<Password>"
- ...
-}
-```
-
-The password is encrypted by the IoT Hub Workload Client and stored in the publisher's configuration. To change authentication back to anonymous, use the method with the following payload:
-
-```json
-{
- "EndpointUrl": "<Url of the endpoint to set authentication settings>",
- "OpcAuthenticationMode": "Anonymous"
- ...
-}
-```
-
-If the **OpcAuthenticationMode** property isn't set in the payload, the authentication settings remain unchanged in the configuration.
-
-## Configure telemetry publishing
-
-When OPC Publisher receives a notification of a value change in a published node, it generates a JSON formatted message that's sent to IoT Hub.
-
-You can configure the content of this JSON formatted message using a configuration file. If no configuration file is specified with the `--tc` option, a default configuration is used that's compatible with the [Connected factory solution accelerator](https://github.com/Azure/azure-iot-connected-factory).
-
-If OPC Publisher is configured to batch messages, then they're sent as a valid JSON array.
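-
-For example, a batch of two value changes in the default (Connected factory compatible) telemetry format would be sent as a single JSON array along these lines. This is a sketch for illustration; the node, timestamp, and value fields are placeholder data matching the format described in the configuration file comments below:
-
-```json
-[
-  {
-    "NodeId": "i=2058",
-    "ApplicationUri": "urn:myopcserver",
-    "DisplayName": "CurrentTime",
-    "Value": {
-      "Value": "10.11.2017 14:03:17",
-      "SourceTimestamp": "2017-11-10T14:03:17Z"
-    }
-  },
-  {
-    "NodeId": "i=2058",
-    "ApplicationUri": "urn:myopcserver",
-    "DisplayName": "CurrentTime",
-    "Value": {
-      "Value": "10.11.2017 14:03:18",
-      "SourceTimestamp": "2017-11-10T14:03:18Z"
-    }
-  }
-]
-```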
-
-The telemetry is derived from the following sources:
-- The OPC Publisher node configuration for the node.
-- The **MonitoredItem** object of the OPC UA stack for which OPC Publisher got a notification.
-- The argument passed to this notification, which provides details on the data value change.
-
-The telemetry that's put into the JSON formatted message is a selection of important properties of these objects. If you need more properties, you need to change the OPC Publisher code base.
-
-The syntax of the configuration file is as follows:
-
-```json
-// The configuration settings file consists of two objects:
-// 1) The 'Defaults' object, which defines defaults for the telemetry configuration
-// 2) An array 'EndpointSpecific' of endpoint specific configuration
-// Both objects are optional and if they are not specified, then publisher uses
-// its internal default configuration, which generates telemetry messages compatible
-// with the Microsoft Connected factory Preconfigured Solution (https://github.com/Azure/azure-iot-connected-factory).
-
-// A JSON telemetry message for Connected factory looks like:
-// {
-// "NodeId": "i=2058",
-// "ApplicationUri": "urn:myopcserver",
-// "DisplayName": "CurrentTime",
-// "Value": {
-// "Value": "10.11.2017 14:03:17",
-// "SourceTimestamp": "2017-11-10T14:03:17Z"
-// }
-// }
-
-// The 'Defaults' object in the sample below is similar to what publisher is
-// using as its internal default telemetry configuration.
-{
- "Defaults": {
- // The first two properties ('EndpointUrl' and 'NodeId') are configuring data
- // taken from the OpcPublisher node configuration.
- "EndpointUrl": {
-
- // The following three properties can be used to configure the 'EndpointUrl'
- // property in the JSON message send by publisher to IoT Hub.
-
- // Publish controls if the property should be part of the JSON message at all.
- "Publish": false,
-
- // Pattern is a regular expression, which is applied to the actual value of the
- // property (here 'EndpointUrl').
- // If this key is omitted (which is the default), then no regex matching is done
- // at all, which improves performance.
- // If the key is used you need to define groups in the regular expression.
- // Publisher applies the regular expression and then concatenates all groups
- // found and uses the resulting string as the value in the JSON message
- // sent to IoT Hub.
- // This example mimics the default behaviour and defines a group,
- // which matches the complete value:
- "Pattern": "(.*)",
- // Here are some more examples for 'Pattern' values and the generated result:
- // "Pattern": "i=(.*)"
- // defined for Defaults.NodeId.Pattern, will generate for the above sample
- // a 'NodeId' value of '2058'to be sent by publisher
- // "Pattern": "(i)=(.*)"
- // defined for Defaults.NodeId.Pattern, will generate for the above sample
- // a 'NodeId' value of 'i2058' to be sent by publisher
-
- // Name allows you to use a shorter string as property name in the JSON message
- // sent by publisher. By default the property name is unchanged and will be
- // here 'EndpointUrl'.
- // The 'Name' property can only be set in the 'Defaults' object to ensure
- // all messages from publisher sent to IoT Hub have a similar layout.
- "Name": "EndpointUrl"
-
- },
- "NodeId": {
- "Publish": true,
-
- // If you set Defaults.NodeId.Name to "ni", then the "NodeId" key/value pair
- // (from the above example) will change to:
- // "ni": "i=2058",
- "Name": "NodeId"
- },
-
- // The MonitoredItem object is configuring the data taken from the MonitoredItem
- // OPC UA object for published nodes.
- "MonitoredItem": {
-
- // If you set the Defaults.MonitoredItem.Flat to 'false', then a
- // 'MonitoredItem' object will appear, which contains 'ApplicationUri'
- // and 'DisplayName' properties:
- // "NodeId": "i=2058",
- // "MonitoredItem": {
- // "ApplicationUri": "urn:myopcserver",
- // "DisplayName": "CurrentTime",
- // }
- // The 'Flat' property can only be used in the 'MonitoredItem' and
- // 'Value' objects of the 'Defaults' object and will be used
- // for all JSON messages sent by publisher.
- "Flat": true,
-
- "ApplicationUri": {
- "Publish": true,
- "Name": "ApplicationUri"
- },
- "DisplayName": {
- "Publish": true,
- "Name": "DisplayName"
- }
- },
- // The Value object is configuring the properties taken from the event object
- // the OPC UA stack provided in the value change notification event.
- "Value": {
- // If you set the Defaults.Value.Flat to 'true', then the 'Value'
- // object will disappear completely and the 'Value' and 'SourceTimestamp'
- // members won't be nested:
- // "DisplayName": "CurrentTime",
- // "Value": "10.11.2017 14:03:17",
- // "SourceTimestamp": "2017-11-10T14:03:17Z"
- // The 'Flat' property can only be used for the 'MonitoredItem' and 'Value'
- // objects of the 'Defaults' object and will be used for all
- // messages sent by publisher.
- "Flat": false,
-
- "Value": {
- "Publish": true,
- "Name": "Value"
- },
- "SourceTimestamp": {
- "Publish": true,
- "Name": "SourceTimestamp"
- },
- // 'StatusCode' is the 32 bit OPC UA status code
- "StatusCode": {
- "Publish": false,
- "Name": "StatusCode"
- // 'Pattern' is ignored for the 'StatusCode' value
- },
- // 'Status' is the symbolic name of 'StatusCode'
- "Status": {
- "Publish": false,
- "Name": "Status"
- }
- }
- },
-
- // The next object allows to configure 'Publish' and 'Pattern' for specific
- // endpoint URLs. Those will overwrite the ones specified in the 'Defaults' object
- // or the defaults used by publisher.
- // It is not allowed to specify 'Name' and 'Flat' properties in this object.
- "EndpointSpecific": [
- // The following shows what an endpoint specific configuration can look like:
- {
- // 'ForEndpointUrl' allows to configure for which OPC UA server this
- // object applies and is a required property for all objects in the
- // 'EndpointSpecific' array.
- // The value of 'ForEndpointUrl' must be an 'EndpointUrl' configured in
- // the publishednodes.json configuration file.
- "ForEndpointUrl": "opc.tcp://<your_opcua_server>:<your_opcua_server_port>/<your_opcua_server_path>",
- "EndpointUrl": {
- // We overwrite the default behaviour and publish the
- // endpoint URL in this case.
- "Publish": true,
- // We are only interested in the URL part following the 'opc.tcp://' prefix
- // and define a group matching this.
- "Pattern": "opc.tcp://(.*)"
- },
- "NodeId": {
- // We are not interested in the configured 'NodeId' value,
- // so we do not publish it.
- "Publish": false
- // No 'Pattern' key is specified here, so the 'NodeId' value will be
- // taken as specified in the publishednodes configuration file.
- },
- "MonitoredItem": {
- "ApplicationUri": {
- // We already publish the endpoint URL, so we do not want
- // the ApplicationUri of the MonitoredItem to be published.
- "Publish": false
- },
- "DisplayName": {
- "Publish": true
- }
- },
- "Value": {
- "Value": {
- // The value of the node is important for us, everything else we
- // are not interested in to keep the data ingest as small as possible.
- "Publish": true
- },
- "SourceTimestamp": {
- "Publish": false
- },
- "StatusCode": {
- "Publish": false
- },
- "Status": {
- "Publish": false
- }
- }
- }
- ]
-}
-```
-
-## Next steps
-
-Now you've learned how to configure OPC Publisher, the suggested next step is to learn how to [Run OPC Publisher](howto-opc-publisher-run.md).
iot-accelerators Howto Opc Publisher Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-run.md
- Title: Run OPC Publisher - Azure | Microsoft Docs
-description: This article describes how to run and debug OPC Publisher. It also addresses performance and memory considerations.
-- Previously updated : 06/10/2019-------
-# Run OPC Publisher
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article describes how to run and debug OPC Publisher. It also addresses performance and memory considerations.
-
-## Command-line options
-
-Application usage is shown using the `--help` command-line option as follows:
-
-```sh/cmd
-Current directory is: /appdata
-Log file is: <hostname>-publisher.log
-Log level is: info
-
-OPC Publisher V2.3.0
-Informational version: V2.3.0+Branch.develop_hans_methodlog.Sha.0985e54f01a0b0d7f143b1248936022ea5d749f9
-
-Usage: opcpublisher.exe <applicationname> [<iothubconnectionstring>] [<options>]
-
-OPC Edge Publisher to subscribe to configured OPC UA servers and send telemetry to Azure IoT Hub.
-To exit the application, just press CTRL-C while it is running.
-
-applicationname: the OPC UA application name to use, required
- The application name is also used to register the publisher under this name in the
- IoT Hub device registry.
-
-iothubconnectionstring: the IoT Hub owner connectionstring, optional
-
-There are a couple of environment variables which can be used to control the application:
-_HUB_CS: sets the IoT Hub owner connectionstring
-_GW_LOGP: sets the filename of the log file to use
-_TPC_SP: sets the path to store certificates of trusted stations
-_GW_PNFP: sets the filename of the publishing configuration file
-
-Command line arguments overrule environment variable settings.
-
-Options:
- --pf, --publishfile=VALUE
- the filename to configure the nodes to publish.
- Default: '/appdata/publishednodes.json'
- --tc, --telemetryconfigfile=VALUE
- the filename to configure the ingested telemetry
- Default: ''
- -s, --site=VALUE the site OPC Publisher is working in. if specified
- this domain is appended (delimited by a ':' to
- the 'ApplicationURI' property when telemetry is
- sent to IoT Hub.
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --ic, --iotcentral publisher will send OPC UA data in IoTCentral
- compatible format (DisplayName of a node is used
- as key, this key is the Field name in IoTCentral)
- . you need to ensure that all DisplayName's are
- unique. (Auto enables fetch display name)
- Default: False
- --sw, --sessionconnectwait=VALUE
- specify the wait time in seconds publisher is
- trying to connect to disconnected endpoints and
- starts monitoring unmonitored items
- Min: 10
- Default: 10
- --mq, --monitoreditemqueuecapacity=VALUE
- specify how many notifications of monitored items
- can be stored in the internal queue, if the data
- can not be sent quick enough to IoT Hub
- Min: 1024
- Default: 8192
- --di, --diagnosticsinterval=VALUE
- shows publisher diagnostic info at the specified
- interval in seconds (need log level info).
- -1 disables remote diagnostic log and diagnostic
- output
- 0 disables diagnostic output
- Default: 0
- --ns, --noshutdown=VALUE
- same as runforever.
- Default: False
- --rf, --runforever publisher can not be stopped by pressing a key on
- the console, but will run forever.
- Default: False
- --lf, --logfile=VALUE the filename of the logfile to use.
- Default: './<hostname>-publisher.log'
- --lt, --logflushtimespan=VALUE
- the timespan in seconds when the logfile should be
- flushed.
- Default: 00:00:30 sec
- --ll, --loglevel=VALUE the loglevel to use (allowed: fatal, error, warn,
- info, debug, verbose).
- Default: info
- --ih, --iothubprotocol=VALUE
- the protocol to use for communication with IoT Hub (
- allowed values: Amqp, Http1, Amqp_WebSocket_Only,
- Amqp_Tcp_Only, Mqtt, Mqtt_WebSocket_Only, Mqtt_
- Tcp_Only) or IoT EdgeHub (allowed values: Mqtt_
- Tcp_Only, Amqp_Tcp_Only).
- Default for IoT Hub: Mqtt_WebSocket_Only
- Default for IoT EdgeHub: Amqp_Tcp_Only
- --ms, --iothubmessagesize=VALUE
- the max size of a message which can be send to
- IoT Hub. when telemetry of this size is available
- it will be sent.
- 0 will enforce immediate send when telemetry is
- available
- Min: 0
- Max: 262144
- Default: 262144
- --si, --iothubsendinterval=VALUE
- the interval in seconds when telemetry should be
- send to IoT Hub. If 0, then only the
- iothubmessagesize parameter controls when
- telemetry is sent.
- Default: '10'
- --dc, --deviceconnectionstring=VALUE
- if publisher is not able to register itself with
- IoT Hub, you can create a device with name <
- applicationname> manually and pass in the
- connectionstring of this device.
- Default: none
- -c, --connectionstring=VALUE
- the IoT Hub owner connectionstring.
- Default: none
- --hb, --heartbeatinterval=VALUE
- the publisher is using this as default value in
- seconds for the heartbeat interval setting of
- nodes without
- a heartbeat interval setting.
- Default: 0
- --sf, --skipfirstevent=VALUE
- the publisher is using this as default value for
- the skip first event setting of nodes without
- a skip first event setting.
- Default: False
- --pn, --portnum=VALUE the server port of the publisher OPC server
- endpoint.
- Default: 62222
- --pa, --path=VALUE the enpoint URL path part of the publisher OPC
- server endpoint.
- Default: '/UA/Publisher'
- --lr, --ldsreginterval=VALUE
- the LDS(-ME) registration interval in ms. If 0,
- then the registration is disabled.
- Default: 0
- --ol, --opcmaxstringlen=VALUE
- the max length of a string opc can transmit/
- receive.
- Default: 131072
- --ot, --operationtimeout=VALUE
- the operation timeout of the publisher OPC UA
- client in ms.
- Default: 120000
- --oi, --opcsamplinginterval=VALUE
- the publisher is using this as default value in
- milliseconds to request the servers to sample
- the nodes with this interval
- this value might be revised by the OPC UA
- servers to a supported sampling interval.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a negative value will set the sampling interval
- to the publishing interval of the subscription
- this node is on.
- 0 will configure the OPC UA server to sample in
- the highest possible resolution and should be
- taken with care.
- Default: 1000
- --op, --opcpublishinginterval=VALUE
- the publisher is using this as default value in
- milliseconds for the publishing interval setting
- of the subscriptions established to the OPC UA
- servers.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a value less than or equal zero will let the
- server revise the publishing interval.
- Default: 0
- --ct, --createsessiontimeout=VALUE
- specify the timeout in seconds used when creating
- a session to an endpoint. On unsuccessful
- connection attemps a backoff up to 5 times the
- specified timeout value is used.
- Min: 1
- Default: 10
- --ki, --keepaliveinterval=VALUE
- specify the interval in seconds the publisher is
- sending keep alive messages to the OPC servers
- on the endpoints it is connected to.
- Min: 2
- Default: 2
- --kt, --keepalivethreshold=VALUE
- specify the number of keep alive packets a server
- can miss, before the session is disconneced
- Min: 1
- Default: 5
- --aa, --autoaccept the publisher trusts all servers it is
- establishing a connection to.
- Default: False
- --tm, --trustmyself=VALUE
- same as trustowncert.
- Default: False
- --to, --trustowncert the publisher certificate is put into the trusted
- certificate store automatically.
- Default: False
- --fd, --fetchdisplayname=VALUE
- same as fetchname.
- Default: False
- --fn, --fetchname enable to read the display name of a published
- node from the server. this will increase the
- runtime.
- Default: False
- --ss, --suppressedopcstatuscodes=VALUE
- specifies the OPC UA status codes for which no
- events should be generated.
- Default: BadNoCommunication,
- BadWaitingForInitialData
- --at, --appcertstoretype=VALUE
- the own application cert store type.
- (allowed values: Directory, X509Store)
- Default: 'Directory'
- --ap, --appcertstorepath=VALUE
- the path where the own application cert should be
- stored
- Default (depends on store type):
- X509Store: 'CurrentUser\UA_MachineDefault'
- Directory: 'pki/own'
- --tp, --trustedcertstorepath=VALUE
- the path of the trusted cert store
- Default: 'pki/trusted'
- --rp, --rejectedcertstorepath=VALUE
- the path of the rejected cert store
- Default 'pki/rejected'
- --ip, --issuercertstorepath=VALUE
- the path of the trusted issuer cert store
- Default 'pki/issuer'
- --csr show data to create a certificate signing request
- Default 'False'
- --ab, --applicationcertbase64=VALUE
- update/set this applications certificate with the
- certificate passed in as bas64 string
- --af, --applicationcertfile=VALUE
- update/set this applications certificate with the
- certificate file specified
- --pb, --privatekeybase64=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as base64 string
- --pk, --privatekeyfile=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as file
- --cp, --certpassword=VALUE
- the optional password for the PEM or PFX or the
- installed application certificate
- --tb, --addtrustedcertbase64=VALUE
- adds the certificate to the applications trusted
- cert store passed in as base64 string (multiple
- strings supported)
- --tf, --addtrustedcertfile=VALUE
- adds the certificate file(s) to the applications
- trusted cert store passed in as base64 string (
- multiple filenames supported)
- --ib, --addissuercertbase64=VALUE
- adds the specified issuer certificate to the
- applications trusted issuer cert store passed in
- as base64 string (multiple strings supported)
- --if, --addissuercertfile=VALUE
- adds the specified issuer certificate file(s) to
- the applications trusted issuer cert store (
- multiple filenames supported)
- --rb, --updatecrlbase64=VALUE
- update the CRL passed in as base64 string to the
- corresponding cert store (trusted or trusted
- issuer)
- --uc, --updatecrlfile=VALUE
- update the CRL passed in as file to the
- corresponding cert store (trusted or trusted
- issuer)
- --rc, --removecert=VALUE
- remove cert(s) with the given thumbprint(s) (
- multiple thumbprints supported)
- --dt, --devicecertstoretype=VALUE
- the IoT Hub device cert store type.
- (allowed values: Directory, X509Store)
- Default: X509Store
- --dp, --devicecertstorepath=VALUE
- the path of the iot device cert store
- Default Default (depends on store type):
- X509Store: 'My'
- Directory: 'CertificateStores/IoTHub'
- -i, --install register OPC Publisher with IoT Hub and then exits.
- Default: False
- -h, --help show this message and exit
- --st, --opcstacktracemask=VALUE
- ignored, only supported for backward comaptibility.
- --sd, --shopfloordomain=VALUE
- same as site option, only there for backward
- compatibility
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --vc, --verboseconsole=VALUE
- ignored, only supported for backward comaptibility.
- --as, --autotrustservercerts=VALUE
- same as autoaccept, only supported for backward
- cmpatibility.
- Default: False
- --tt, --trustedcertstoretype=VALUE
- ignored, only supported for backward compatibility.
- the trusted cert store will always reside in a
- directory.
- --rt, --rejectedcertstoretype=VALUE
- ignored, only supported for backward compatibility.
- the rejected cert store will always reside in a
- directory.
- --it, --issuercertstoretype=VALUE
- ignored, only supported for backward compatibility.
- the trusted issuer cert store will always
- reside in a directory.
-```
-
-Typically you specify the IoT Hub owner connection string only on the first run of the application. The connection string is encrypted and stored in the platform certificate store. On later runs, the application reads the connection string from the certificate store. If you specify the connection string on each run, the device that's created for the application in the IoT Hub device registry is removed and recreated.
-
-## Run natively on Windows
-
-Open the **opcpublisher.sln** project with Visual Studio, build the solution, and publish it. You can start the application in the **Target directory** you published to as follows:
-
-```cmd
-dotnet opcpublisher.dll <applicationname> [<iothubconnectionstring>] [options]
-```
-
-## Use a self-built container
-
-Build your own container and start it as follows:
-
-```sh/cmd
-docker run <your-container-name> <applicationname> [<iothubconnectionstring>] [options]
-```
-
-## Use a container from Microsoft Container Registry
-
-There's a prebuilt container available in the Microsoft Container Registry. Start it as follows:
-
-```sh/cmd
-docker run mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-Check [Docker Hub](https://hub.docker.com/_/microsoft-iotedge-opc-publisher) to see the supported operating systems and processor architectures. If your OS and CPU architecture are supported, Docker automatically selects the correct container.
-
-## Run as an Azure IoT Edge module
-
-OPC Publisher is ready to be used as an [Azure IoT Edge](../iot-edge/index.yml) module. When you use OPC Publisher as an IoT Edge module, the only supported transport protocols are **Amqp_Tcp_Only** and **Mqtt_Tcp_Only**.
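-
-For example, to pin the transport explicitly, you could pass the protocol option in the module's **Container Create Options**. This is a sketch only; it assumes the `--ih` option syntax from the command-line help shown earlier:
-
-```json
-{
-  "Hostname": "publisher",
-  "Cmd": [
-    "--ih=Amqp_Tcp_Only",
-    "--aa"
-  ]
-}
-```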
-
-To add OPC Publisher as module to your IoT Edge deployment, go to your IoT Hub settings in the Azure portal and complete the following steps:
-
-1. Go to **IoT Edge** and create or select your IoT Edge device.
-1. Select **Set Modules**.
-1. Select **Add** under **Deployment Modules** and then **IoT Edge Module**.
-1. In the **Name** field, enter **publisher**.
-1. In the **Image URI** field, enter `mcr.microsoft.com/iotedge/opc-publisher:<tag>`
-1. You can find the available tags on [Docker Hub](https://hub.docker.com/_/microsoft-iotedge-opc-publisher)
-1. Paste the following JSON into the **Container Create Options** field:
-
- ```json
- {
- "Hostname": "publisher",
- "Cmd": [
- "--aa"
- ]
- }
- ```
-
- This configuration tells IoT Edge to start a container called **publisher** using the OPC Publisher image. The hostname of the container is set to **publisher**. OPC Publisher is called with the following command-line argument: `--aa`. With this option, OPC Publisher trusts the certificates of the OPC UA servers it connects to. You can use any OPC Publisher command-line options; the only limitation is the size of the **Container Create Options** supported by IoT Edge.
-
-1. Leave the other settings unchanged and select **Save**.
-1. If you want to process the output of the OPC Publisher locally with another IoT Edge module, go back to the **Set Modules** page. Then go to the **Specify Routes** tab, and add a new route that looks like the following JSON:
-
- ```json
- {
- "routes": {
- "processingModuleToIoT Hub": "FROM /messages/modules/processingModule/outputs/* INTO $upstream",
- "opcPublisherToProcessingModule": "FROM /messages/modules/publisher INTO BrokeredEndpoint(\"/modules/processingModule/inputs/input1\")"
- }
- }
- ```
-
-1. Back in the **Set Modules** page, select **Next**, until you reach the last page of the configuration.
-1. Select **Submit** to send your configuration to IoT Edge.
-1. When you've started IoT Edge on your edge device and the docker container **publisher** is running, you can check out the log output of OPC Publisher either by
- using `docker logs -f publisher` or by checking the logfile. In the previous example, the log file is `d:\iiotedge\publisher-publisher.log`. You can also use the [iot-edge-opc-publisher-diagnostics tool](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics).
-
-### Make the configuration files accessible on the host
-
-To make the IoT Edge module configuration files accessible in the host file system, use the following **Container Create Options**. The following example is of a deployment using Linux Containers for Windows:
-
-```json
-{
- "Hostname": "publisher",
- "Cmd": [
- "--pf=/appdata/pn.json",
- "--aa"
- ],
- "HostConfig": {
- "Binds": [
- "d:/iiotedge:/appdata"
- ]
- }
-}
-```
-
-With these options, OPC Publisher reads the nodes it should publish from the file `./pn.json` and the container's working directory is set to `/appdata` at startup. With these settings, OPC Publisher reads the file `/appdata/pn.json` from the container to get its configuration. Without the `--pf` option, OPC Publisher tries to read the default configuration file `./publishednodes.json`.
-
-The log file, using the default name `publisher-publisher.log`, is written to `/appdata` and the `CertificateStores` directory is also created in this directory.
-
-To make all these files available in the host file system, the container configuration requires a bind mount volume. The `d:/iiotedge:/appdata` bind maps the directory `/appdata`, which is the current working directory on container startup, to the host directory `d:/iiotedge`. Without this option, no file data is persisted when the container next starts.
-
-If you're running Windows containers, then the syntax of the `Binds` parameter is different. At container startup, the working directory is `c:\appdata`. To put the configuration file in the directory `d:\iiotedge` on the host, specify the following mapping in the `HostConfig` section:
-
-```json
-"HostConfig": {
- "Binds": [
- "d:/iiotedge:c:/appdata"
- ]
-}
-```
-
-If you're running Linux containers on Linux, the syntax of the `Binds` parameter is again different. At container startup, the working directory is `/appdata`. To put the configuration file in the directory `/iiotedge` on the host, specify the following mapping in the `HostConfig` section:
-
-```json
-"HostConfig": {
- "Binds": [
- "/iiotedge:/appdata"
- ]
-}
-```
-
-## Considerations when using a container
-
-The following sections list some things to keep in mind when you use a container:
-
-### Access to the OPC Publisher OPC UA server
-
-By default, the OPC Publisher OPC UA server listens on port 62222. To expose this inbound port in a container, use the following command:
-
-```sh/cmd
-docker run -p 62222:62222 mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-### Enable intercontainer name resolution
-
-To enable name resolution from within the container to other containers, create a user-defined Docker bridge network, and connect the container to this network using the `--network` option. Also assign the container a name using the `--name` option as follows:
-
-```sh/cmd
-docker network create -d bridge iot_edge
-docker run --network iot_edge --name publisher mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-The container is now reachable using the name `publisher` by other containers on the same network.
-
-### Access other systems from within the container
-
-Other containers can be reached using the parameters described in the previous section. If the operating system on which Docker is hosted is DNS-enabled, then accessing all systems that are known to DNS works.
-
-In networks that use NetBIOS name resolution, enable access to other systems by starting your container with the `--add-host` option. This option effectively adds an entry to the container's host file:
-
-```cmd/sh
-docker run --add-host mydevbox:192.168.178.23 mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-### Assign a hostname
-
-OPC Publisher uses the hostname of the machine it's running on for certificate and endpoint generation. Docker chooses a random hostname if one isn't set by the `-h` option. The following example shows how to set the internal hostname of the container to `publisher`:
-
-```sh/cmd
-docker run -h publisher mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-### Use bind mounts (shared filesystem)
-
-Instead of using the container file system, you may choose the host file system to store configuration information and log files. To configure this option, use the `-v` option of `docker run` in the bind mount mode.
-
-## OPC UA X.509 certificates
-
-OPC UA uses X.509 certificates to authenticate the OPC UA client and server when they establish a connection and to encrypt the communication between them. OPC Publisher uses certificate stores maintained by the OPC UA stack to manage all certificates. On startup, OPC Publisher checks if there's a certificate for itself. If there's no certificate in the certificate store, and none is passed in on the command line, OPC Publisher creates a self-signed certificate. For more information, see the **InitApplicationSecurityAsync** method in `OpcApplicationConfigurationSecurity.cs`.
-
-Self-signed certificates don't provide any security, as they're not signed by a trusted CA.
-
-OPC Publisher provides command-line options to:
-- Retrieve CSR information of the current application certificate used by OPC Publisher.
-- Provision OPC Publisher with a CA signed certificate.
-- Provision OPC Publisher with a new key pair and matching CA signed certificate.
-- Add certificates to a trusted peer or trusted issuer certificate store.
-- Add a CRL.
-- Remove a certificate from the trusted peer or trusted issuers certificate store.
-
-All these options let you pass in parameters using files or base64 encoded strings.
-
-The default store type for all certificate stores is the file system, which you can change using command-line options. Because the container doesn't provide persistent storage in its file system, you must choose a different store type. Use the Docker `-v` option to persist the certificate stores in the host file system or on a Docker volume. If you use a Docker volume, you can pass in certificates using base64 encoded strings.
-
-The runtime environment affects how certificates are persisted. Avoid creating new certificate stores each time you run the application:
-- Running natively on Windows, you can't use an application certificate store of type `Directory` because access to the private key fails. In this case, use the option `--at X509Store`.
-- Running as a Linux Docker container, you can map the certificate stores to the host file system with the docker run option `-v <hostdirectory>:/appdata`. This option makes the certificate persistent across application runs.
-- Running as a Linux Docker container and using an X509 store for the application certificate, use the docker run option `-v x509certstores:/root/.dotnet/corefx/cryptography/x509stores` and the application option `--at X509Store` (see the sketch after this list).
-
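-As an illustration, the same X509 store mapping could also be expressed in IoT Edge **Container Create Options**. This is a sketch under the assumption that the volume name and store path match the docker run option shown in the list above:
-
-```json
-{
-  "Cmd": [
-    "--at=X509Store"
-  ],
-  "HostConfig": {
-    "Binds": [
-      "x509certstores:/root/.dotnet/corefx/cryptography/x509stores"
-    ]
-  }
-}
-```
-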
-## Performance and memory considerations
-
-This section discusses options for managing memory and performance:
-
-### Command-line parameters to control performance and memory
-
-When you run OPC Publisher, you need to be aware of your performance requirements and the memory resources available on your host.
-
-Memory and performance are interdependent, and both depend on how many nodes you configure to publish. Ensure that the following parameters meet your requirements:
-- IoT Hub send interval: `--si`
-- IoT Hub message size (default `1`): `--ms`
-- Monitored items queue capacity: `--mq`
-
-The `--mq` parameter controls the upper bound of the capacity of the internal queue, which buffers all OPC node value change notifications. If OPC Publisher can't send messages to IoT Hub fast enough, this queue buffers the notifications. The parameter sets the number of notifications that can be buffered. If you see the number of items in this queue increasing in your test runs, then to avoid losing messages you should:
-- Reduce the IoT Hub send interval
-- Increase the IoT Hub message size
-
-The `--si` parameter forces OPC Publisher to send messages to IoT Hub at the specified interval. OPC Publisher sends a message as soon as the message size specified by the `--ms` parameter is reached, or as soon as the interval specified by the `--si` parameter is reached. To disable the message size option, use `--ms 0`. In this case, OPC Publisher uses the largest possible IoT Hub message size of 256 kB to batch data.
-
-The `--ms` parameter lets you batch messages sent to IoT Hub. The protocol you're using determines whether the overhead of sending a message to IoT Hub is high compared to the actual time of sending the payload. If your scenario allows for latency when data is ingested by IoT Hub, configure OPC Publisher to use the largest message size of 256 kB.
-
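-For example, to keep latency low while still batching internally up to the largest message size, you could pass a one second send interval and a message size of `0`. The values below are a sketch that mirrors the test configurations shown later in this article; they're shown here as IoT Edge **Container Create Options**, but the same options work on any command line:
-
-```json
-{
-  "Hostname": "publisher",
-  "Cmd": [
-    "--si=1",
-    "--ms=0",
-    "--aa"
-  ]
-}
-```
-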
-Before you use OPC Publisher in production scenarios, test the performance and memory usage under production conditions. You can use the `--di` parameter to specify the interval, in seconds, that OPC Publisher writes diagnostic information.
-
-### Test measurements
-
-The following example diagnostics show measurements with different values for `--si` and `--ms` parameters publishing 500 nodes with an OPC publishing interval of 1 second. The test used an OPC Publisher debug build on Windows 10 natively for 120 seconds. The IoT Hub protocol was the default MQTT protocol.
-
-#### Default configuration (--si 10 --ms 262144)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:33:05 (started @ 26.10.2017 15:31:09)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 0
-monitored item notifications enqueued: 54363
-monitored item notifications enqueue failure: 0
-monitored item notifications dequeued: 54363
-
-messages sent to IoT Hub: 109
-last successful msg sent @: 26.10.2017 15:33:04
-bytes sent to IoT Hub: 12709429
-avg msg size: 116600
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 90
---si setting: 10
---ms setting: 262144
---ih setting: Mqtt
-==========================================================================
-```
-
-The default configuration sends data to IoT Hub every 10 seconds, or when 256 kB of data is available for IoT Hub to ingest. This configuration adds a moderate latency of about 10 seconds, but has the lowest probability of losing data because of the large message size. The diagnostics output shows there are no lost OPC node updates: `monitored item notifications enqueue failure: 0`.
-
-#### Constant send interval (--si 1 --ms 0)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:35:59 (started @ 26.10.2017 15:34:03)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 0
-monitored item notifications enqueued: 54243
-monitored item notifications enqueue failure: 0
-monitored item notifications dequeued: 54243
-
-messages sent to IoT Hub: 109
-last successful msg sent @: 26.10.2017 15:35:59
-bytes sent to IoT Hub: 12683836
-avg msg size: 116365
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 90
---si setting: 1
---ms setting: 0
---ih setting: Mqtt
-==========================================================================
-```
-
-When the message size is set to 0, OPC Publisher internally batches data using the largest supported IoT Hub message size, which is 256 kB. The diagnostic output shows the average message size is 116,365 bytes. In this configuration OPC Publisher doesn't lose any OPC node value updates, and compared to the default it has lower latency.
-
-#### Send each OPC node value update (--si 0 --ms 0)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:39:33 (started @ 26.10.2017 15:37:37)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 8184
-monitored item notifications enqueued: 54232
-monitored item notifications enqueue failure: 44624
-monitored item notifications dequeued: 1424
-
-messages sent to IoT Hub: 1423
-last successful msg sent @: 26.10.2017 15:39:33
-bytes sent to IoT Hub: 333046
-avg msg size: 234
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 96
---si setting: 0
---ms setting: 0
---ih setting: Mqtt
-==========================================================================
-```
-
-This configuration sends a message to IoT Hub for each OPC node value change. The diagnostics show the average message size is 234 bytes, which is small. The advantage of this configuration is that OPC Publisher doesn't add any latency. The number of lost OPC node value updates (`monitored item notifications enqueue failure: 44624`) is high, which makes this configuration unsuitable for scenarios with high volumes of telemetry to be published.
-
-#### Maximum batching (--si 0 --ms 262144)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:42:55 (started @ 26.10.2017 15:41:00)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 0
-monitored item notifications enqueued: 54137
-monitored item notifications enqueue failure: 0
-monitored item notifications dequeued: 54137
-
-messages sent to IoT Hub: 48
-last successful msg sent @: 26.10.2017 15:42:55
-bytes sent to IoT Hub: 12565544
-avg msg size: 261782
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 90
---si setting: 0
---ms setting: 262144
---ih setting: Mqtt
-==========================================================================
-```
-
-This configuration batches as many OPC node value updates as possible. The maximum IoT Hub message size is 256 kB, which is configured here. There's no send interval requested, which means the amount of data for IoT Hub to ingest determines the latency. This configuration has the least probability of losing any OPC node values and is suitable for publishing a high number of nodes. When you use this configuration, ensure your scenario doesn't have conditions where high latency is introduced if the message size of 256 kB isn't reached.
-
-## Debug the application
-
-To debug the application, open the **opcpublisher.sln** solution file with Visual Studio and use the Visual Studio debugging tools.
-
-If you need to access the OPC UA server in OPC Publisher, make sure that your firewall allows access to the port the server listens on. The default port is 62222.
-
-## Control the application remotely
-
-Configuring the nodes to publish can be done using IoT Hub direct methods.
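-
-For example, reading back the nodes configured for one endpoint with the **GetConfiguredNodesOnEndpoint** direct method would use a payload along the lines of the following sketch. The exact request schema is defined in the OPC Publisher source (`HubMethodModel.cs`), and the endpoint URL here is a placeholder:
-
-```json
-{
-  "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer"
-}
-```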
-
-OPC Publisher implements a few additional IoT Hub direct method calls to read:
-- General information.
-- Diagnostic information on OPC sessions, subscriptions, and monitored items.
-- Diagnostic information on IoT Hub messages and events.
-- The startup log.
-- The last 100 lines of the log.
-- Shut down the application.
-
-The following GitHub repositories contain tools to [configure the nodes to publish](https://github.com/Azure-Samples/iot-edge-opc-publisher-nodeconfiguration) and [read the diagnostic information](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics). Both tools are also available as containers in Docker Hub.
-
-## Use a sample OPC UA server
-
-If you don't have a real OPC UA server, you can use the [sample OPC UA PLC](https://github.com/Azure-Samples/iot-edge-opc-plc) to get started. This sample PLC is also available on Docker Hub.
-
-It implements a number of tags that generate random data, as well as tags with anomalies. You can extend the sample if you need to simulate additional tag values.
-
-## Next steps
-
-Now that you've learned how to run OPC Publisher, the recommended next steps are to learn about [OPC Twin](overview-opc-twin.md) and [OPC Vault](overview-opc-vault.md).
iot-accelerators Howto Opc Twin Deploy Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-dependencies.md
- Title: How to deploy OPC Twin cloud dependencies in Azure | Microsoft Docs
-description: This article describes how to deploy the OPC Twin Azure dependencies needed to do local development and debugging.
-- Previously updated : 11/26/2018------
-# Deploying dependencies for local development
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article explains how to deploy only the Azure Platform Services needed to do local development and debugging. At the end, you will have a resource group deployed that contains everything you need for local development and debugging.
-
-## Deploy Azure platform services
-
-1. Make sure you have PowerShell and [AzureRM PowerShell](/powershell/azure/azurerm/install-azurerm-ps) extensions installed. Open a command prompt or terminal and run:
-
- ```bash
- git clone https://github.com/Azure/azure-iiot-components
- cd azure-iiot-components
- ```
-
- ```bash
- deploy -type local
- ```
-
-2. Follow the prompts to assign a name to the resource group for your deployment. The script deploys only the dependencies to this resource group in your Azure subscription, but not the microservices. The script also registers an Application in Azure AD. This is needed to support OAUTH-based authentication. Deployment can take several minutes.
-
-3. Once the script completes, you can select to save the .env file. The .env environment file is the configuration file of all services and tools you want to run on your development machine.
-
-## Troubleshooting deployment failures
-
-### Resource group name
-
-Ensure you use a short and simple resource group name. The name is also used to name resources, so it must comply with resource naming requirements.
-
-### Azure Active Directory (AD) registration
-
-The deployment script tries to register Azure AD applications in Azure AD. Depending on your rights to the selected Azure AD tenant, this might fail. There are three options:
-
-1. If you chose an Azure AD tenant from a list of tenants, restart the script and choose a different one from the list.
-2. Alternatively, deploy a private Azure AD tenant, restart the script, and select to use it.
-3. Continue without authentication. Since you are running your microservices locally, this is acceptable, but it does not mimic production environments.
-
-## Next steps
-
-Now that you have successfully deployed OPC Twin services to an existing project, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Learn about how to deploy OPC Twin modules](howto-opc-twin-deploy-modules.md)
iot-accelerators Howto Opc Twin Deploy Existing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-existing.md
- Title: How to deploy an OPC Twin module to an existing Azure project | Microsoft Docs
-description: This article describes how to deploy OPC Twin to an existing project. You can also learn how to troubleshoot deployment failures.
-- Previously updated : 11/26/2018------
-# Deploy OPC Twin to an existing project
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The OPC Twin module runs on IoT Edge and provides several edge services to the OPC Twin and Registry services.
-
-The OPC Twin microservice facilitates the communication between factory operators and OPC UA server devices on the factory floor via an OPC Twin IoT Edge module. The microservice exposes OPC UA services (Browse, Read, Write, and Execute) via its REST API.
-
-The OPC UA device registry microservice provides access to registered OPC UA applications and their endpoints. Operators and administrators can register and unregister new OPC UA applications and browse the existing ones, including their endpoints. In addition to application and endpoint management, the registry service also catalogs registered OPC Twin IoT Edge modules. The service API gives you control of edge module functionality, for example, starting or stopping server discovery (scanning services), or activating new endpoint twins that can be accessed using the OPC Twin microservice.
-
-The core of the module is the Supervisor identity. The supervisor manages endpoint twins, which correspond to OPC UA server endpoints that are activated using the corresponding OPC UA registry API. These endpoint twins translate OPC UA JSON received from the OPC Twin microservice running in the cloud into OPC UA binary messages, which are sent over a stateful secure channel to the managed endpoint. The supervisor also provides discovery services that send device discovery events to the OPC UA device onboarding service for processing, where these events result in updates to the OPC UA registry. This article shows you how to deploy the OPC Twin module to an existing project.
-
-> [!NOTE]
-> For more information on deployment details and instructions, see the GitHub [repository](https://github.com/Azure/azure-iiot-opc-twin-module).
-
-## Prerequisites
-
-Make sure you have PowerShell and [AzureRM PowerShell](/powershell/azure/azurerm/install-azurerm-ps) extensions installed. If you've not already done so, clone this GitHub repository. Run the following commands in PowerShell:
-
-```powershell
-git clone --recursive https://github.com/Azure/azure-iiot-components.git
-cd azure-iiot-components
-```
-
-## Deploy industrial IoT services to Azure
-
-1. In your PowerShell session, run:
-
- ```powershell
- set-executionpolicy -ExecutionPolicy Unrestricted -Scope Process
- .\deploy.cmd
- ```
-
-2. Follow the prompts to assign a name to the resource group of the deployment and a name to the website. The script deploys the microservices and their Azure platform dependencies into the resource group in your Azure subscription. The script also registers an Application in your Azure Active Directory (AAD) tenant to support OAUTH-based authentication. Deployment will take several minutes. An example of what you'd see once the solution is successfully deployed:
-
- ![Industrial IoT OPC Twin deploy to existing project](media/howto-opc-twin-deploy-existing/opc-twin-deploy-existing1.png)
-
- The output includes the URL of the public endpoint.
-
-3. Once the script completes successfully, select whether you want to save the `.env` file. You need the `.env` environment file if you want to connect to the cloud endpoint using tools such as the Console or deploy modules for development and debugging.
-
-## Troubleshooting deployment failures
-
-### Resource group name
-
-Ensure you use a short and simple resource group name. The name is also used to name resources, so it must comply with resource naming requirements.
-
-### Website name already in use
-
-It is possible that the name of the website is already in use. If you run into this error, you need to use a different application name.
-
-### Azure Active Directory (AAD) registration
-
-The deployment script tries to register two AAD applications in Azure Active Directory. Depending on your rights to the selected AAD tenant, the deployment might fail. There are two options:
-
-* If you chose an AAD tenant from a list of tenants, restart the script and choose a different one from the list.
-* Alternatively, deploy a private AAD tenant in another subscription, restart the script, and select to use it.
-
-> [!WARNING]
-> NEVER continue without Authentication. If you choose to do so, anyone can access your OPC Twin endpoints from the Internet unauthenticated. You can always choose the ["local" deployment option](howto-opc-twin-deploy-dependencies.md) to kick the tires.
-
-## Deploy an all-in-one industrial IoT services demo
-
-Instead of just the services and dependencies, you can also deploy an all-in-one demo. The all-in-one demo contains three OPC UA servers, the OPC Twin module, all microservices, and a sample web application. It is intended for demonstration purposes.
-
-1. Make sure you have a clone of the repository (see above). Open a PowerShell prompt in the root of the repository and run:
-
- ```powershell
- set-executionpolicy -ExecutionPolicy Unrestricted -Scope Process
- .\deploy -type demo
- ```
-
-2. Follow the prompts to assign a new name to the resource group and a name to the website. Once deployed successfully, the script will display the URL of the web application endpoint.
-
-## Deployment script options
-
-The script takes the following parameters:
-
-```powershell
--type
-```
-
-The type of deployment (vm, local, demo)
-
-```powershell
--resourceGroupName
-```
-
-Can be the name of an existing or a new resource group.
-
-```powershell
--subscriptionId
-```
-
-Optional, the subscription ID where resources will be deployed.
-
-```powershell
--subscriptionName
-```
-
-Or the subscription name.
-
-```powershell
--resourceGroupLocation
-```
-
-Optional, a resource group location. If specified, will try to create a new resource group in this location.
-
-```powershell
--aadApplicationName
-```
-
-A name for the AAD application to register under.
-
-```powershell
--tenantId
-```
-
-AAD tenant to use.
-
-```powershell
--credentials
-```
-
-## Next steps
-
-Now that you've learned how to deploy OPC Twin to an existing project, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Secure communication of OPC UA Client and OPC UA PLC](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Twin Deploy Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-modules.md
- Title: How to deploy OPC Twin module for Azure from scratch | Microsoft Docs
-description: This article describes how to deploy OPC Twin from scratch using the Azure portal's IoT Edge blade and also using AZ CLI.
-- Previously updated : 11/26/2018------
-# Deploy OPC Twin module and dependencies from scratch
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The OPC Twin module runs on IoT Edge and provides several edge services to the OPC device twin and registry services.
-
-There are several options to deploy modules to your [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) Gateway, among them
-- [Deploying from Azure portal's IoT Edge blade](../iot-edge/how-to-deploy-modules-portal.md)
-- [Deploying using AZ CLI](../iot-edge/how-to-deploy-cli-at-scale.md)
-
-> [!NOTE]
-> For more information on deployment details and instructions, see the GitHub [repository](https://github.com/Azure/azure-iiot-components).
-
-## Deployment manifest
-
-All modules are deployed using a deployment manifest. An example manifest to deploy both [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher) and [OPC Twin](https://github.com/Azure/azure-iiot-opc-twin-module) is shown below.
-
-```json
-{
- "content": {
- "modulesContent": {
- "$edgeAgent": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "runtime": {
- "type": "docker",
- "settings": {
- "minDockerVersion": "v1.25",
- "loggingOptions": "",
- "registryCredentials": {}
- }
- },
- "systemModules": {
- "edgeAgent": {
- "type": "docker",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
- "createOptions": ""
- }
- },
- "edgeHub": {
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
- "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}], \"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
- }
- }
- },
- "modules": {
- "opctwin": {
- "version": "1.0",
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/iotedge/opc-twin:latest",
- "createOptions": "{\"NetworkingConfig\": {\"EndpointsConfig\": {\"host\": {}}}, \"HostConfig\": {\"NetworkMode\": \"host\" }}"
- }
- },
- "opcpublisher": {
- "version": "2.0",
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/iotedge/opc-publisher:latest",
- "createOptions": "{\"Hostname\":\"publisher\",\"Cmd\":[\"publisher\",\"--pf=./pn.json\",\"--di=60\",\"--tm\",\"--aa\",\"--si=0\",\"--ms=0\"],\"ExposedPorts\":{\"62222/tcp\":{}},\"NetworkingConfig\":{\"EndpointsConfig\":{\"host\":{}}},\"HostConfig\":{\"NetworkMode\":\"host\",\"PortBindings\":{\"62222/tcp\":[{\"HostPort\":\"62222\"}]}}}"
- }
- }
- }
- }
- },
- "$edgeHub": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "routes": {
- "opctwinToIoTHub": "FROM /messages/modules/opctwin/* INTO $upstream",
- "opcpublisherToIoTHub": "FROM /messages/modules/opcpublisher/* INTO $upstream"
- },
- "storeAndForwardConfiguration": {
- "timeToLiveSecs": 7200
- }
- }
- }
- }
- }
-}
-```
-
-## Deploying from Azure portal
-
-The easiest way to deploy the modules to an Azure IoT Edge gateway device is through the Azure portal.
-
-### Prerequisites
-
-1. Deploy the OPC Twin [dependencies](howto-opc-twin-deploy-dependencies.md) and obtain the resulting `.env` file. Note the deployed hub name in the `PCS_IOTHUBREACT_HUB_NAME` variable of the resulting `.env` file.
-
-2. Register and start a [Linux](../iot-edge/how-to-install-iot-edge.md) or [Windows](../iot-edge/how-to-install-iot-edge.md) IoT Edge gateway and note its `device id`.
-
-### Deploy to an edge device
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT hub.
-
-2. Select **IoT Edge** from the left-hand menu.
-
-3. Click on the ID of the target device from the list of devices.
-
-4. Select **Set Modules**.
-
-5. In the **Deployment modules** section of the page, select **Add** and **IoT Edge Module.**
-
-6. In the **IoT Edge Custom Module** dialog, use `opctwin` as the name for the module, then specify the container *Image URI* as:
-
- ```bash
- mcr.microsoft.com/iotedge/opc-twin:latest
- ```
-
- As *Container Create Options*, use the following JSON:
-
- ```json
- {"NetworkingConfig": {"EndpointsConfig": {"host": {}}}, "HostConfig": {"NetworkMode": "host" }}
- ```
-
- Fill out the optional fields if necessary. For more information about container create options, restart policy, and desired status, see [EdgeAgent desired properties](../iot-edge/module-edgeagent-edgehub.md#edgeagent-desired-properties). For more information about the module twin, see [Define or update desired properties](../iot-edge/module-composition.md#define-or-update-desired-properties).
-
-7. Select **Save** and repeat step **5**.
-
-8. In the **IoT Edge Custom Module** dialog, use `opcpublisher` as the name for the module, then specify the container *Image URI* as:
-
- ```bash
- mcr.microsoft.com/iotedge/opc-publisher:latest
- ```
-
- As *Container Create Options*, use the following JSON:
-
- ```json
- {"Hostname":"publisher","Cmd":["publisher","--pf=./pn.json","--di=60","--tm","--aa","--si=0","--ms=0"],"ExposedPorts":{"62222/tcp":{}},"HostConfig":{"PortBindings":{"62222/tcp":[{"HostPort":"62222"}] }}}
- ```
-
-9. Select **Save** and then **Next** to continue to the routes section.
-
-10. In the routes tab, paste the following:
-
- ```json
- {
- "routes": {
- "opctwinToIoTHub": "FROM /messages/modules/opctwin/* INTO $upstream",
- "opcpublisherToIoTHub": "FROM /messages/modules/opcpublisher/* INTO $upstream"
- }
- }
- ```
-
- and select **Next**.
-
-11. Review your deployment information and manifest. It should look like the above deployment manifest. Select **Submit**.
-
-12. Once you've deployed modules to your device, you can view all of them in the **Device details** page of the portal. This page displays the name of each deployed module, as well as useful information like the deployment status and exit code.
-
-## Deploying using Azure CLI
-
-### Prerequisites
-
-1. Install the latest version of the [Azure CLI](/cli/azure/). For installation instructions, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-### Quickstart
-
-1. Save the above deployment manifest into a `deployment.json` file.
-
-2. Use the following command to apply the configuration to an IoT Edge device:
-
- ```azurecli
- az iot edge set-modules --device-id [device id] --hub-name [hub name] --content ./deployment.json
- ```
-
- The `device id` parameter is case-sensitive. The content parameter points to the deployment manifest file that you saved.
- ![az IoT Edge set-modules output](/azure/iot-edge/media/how-to-deploy-cli/set-modules.png)
-
-3. Once you've deployed modules to your device, you can view all of them with the following command:
-
- ```azurecli
- az iot hub module-identity list --device-id [device id] --hub-name [hub name]
- ```
-
- The device ID parameter is case-sensitive. ![az iot hub module-identity list output](/azure/iot-edge/media/how-to-deploy-cli/list-modules.png)
-
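-If you're starting from a fresh shell, the whole sequence might look like the following sketch. This is illustrative only; the subscription name, device ID, and hub name are placeholders:
-
```powershell
# Sign in and select the subscription that contains your IoT hub (names are placeholders).
az login
az account set --subscription "MySubscriptionName"

# Apply the deployment manifest saved earlier as deployment.json.
az iot edge set-modules --device-id myEdgeDevice --hub-name myIoTHub --content ./deployment.json

# Verify that the opctwin and opcpublisher module identities were created.
az iot hub module-identity list --device-id myEdgeDevice --hub-name myIoTHub
```
-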
-## Next steps
-
-Now that you have learned how to deploy OPC Twin from scratch, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Deploy OPC Twin to an existing project](howto-opc-twin-deploy-existing.md)
iot-accelerators Howto Opc Vault Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-deploy.md
- Title: How to deploy the OPC Vault certificate management service - Azure | Microsoft Docs
-description: How to deploy the OPC Vault certificate management service from scratch.
- Previously updated : 08/16/2019
-# Build and deploy the OPC Vault certificate management service
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article explains how to deploy the OPC Vault certificate management service in Azure.
-
-> [!NOTE]
-> For more information, see the GitHub [OPC Vault repository](https://github.com/Azure/azure-iiot-opc-vault-service).
-
-## Prerequisites
-
-### Install required software
-
-Currently, the build and deploy operation is limited to Windows.
-The samples are all written in C# for .NET Standard, which you need in order to build the service and samples for deployment.
-All the tools you need for .NET Standard come with the .NET Core tools. See [Get started with .NET Core](/dotnet/articles/core/getting-started).
-
-1. [Install .NET Core 2.1+][dotnet-install].
-2. [Install Docker][docker-url] (optional, only if the local Docker build is required).
-3. Install the [Azure command-line tools for PowerShell][powershell-install].
-4. Sign up for an [Azure subscription][azure-free].
-
-### Clone the repository
-
-If you haven't done so yet, clone this GitHub repository. Open a command prompt or terminal, and run the following:
-
-```bash
-git clone https://github.com/Azure/azure-iiot-opc-vault-service
-cd azure-iiot-opc-vault-service
-```
-
-Alternatively, you can clone the repo directly in Visual Studio 2017.
-
-### Build and deploy the Azure service on Windows
-
-A PowerShell script provides an easy way to deploy the OPC Vault microservice and the application.
-
-1. Open a PowerShell window at the repo root.
-2. Go to the deploy folder: `cd deploy`.
-3. Choose a name for `myResourceGroup` that's unlikely to cause a conflict with other deployed webpages. See the "Website name already in use" section later in this article.
-4. Start the deployment with `.\deploy.ps1` for interactive installation, or enter a full command line:
-`.\deploy.ps1 -subscriptionName "MySubscriptionName" -resourceGroupLocation "East US" -tenantId "myTenantId" -resourceGroupName "myResourceGroup"`
-5. If you plan to develop with this deployment, add `-development 1` to enable the Swagger UI, and to deploy debug builds.
-6. Follow the instructions in the script to sign in to your subscription, and to provide additional information.
-7. After a successful build and deploy operation, you should see the following message:
- ```
- To access the web client go to:
- https://myResourceGroup.azurewebsites.net
-
- To access the web service go to:
- https://myResourceGroup-service.azurewebsites.net
-
- To start the local docker GDS server:
- .\myResourceGroup-dockergds.cmd
-
- To start the local dotnet GDS server:
- .\myResourceGroup-gds.cmd
- ```
-
- > [!NOTE]
- > In case of problems, see the "Troubleshooting deployment failures" section later in the article.
-
-8. Open your favorite browser, and open the application page: `https://myResourceGroup.azurewebsites.net`
-9. Give the web app and the OPC Vault microservice a few minutes to warm up after deployment. The web home page might stop responding on first use, for up to a minute, until you get the first responses.
-10. To take a look at the Swagger API, open: `https://myResourceGroup-service.azurewebsites.net`
-11. To start a local GDS server with dotnet, start `.\myResourceGroup-gds.cmd`. With Docker, start `.\myResourceGroup-dockergds.cmd`.
-
-It's possible to redeploy a build with exactly the same settings. Be aware that such an operation renews all application secrets, and might reset some settings in the Azure Active Directory (Azure AD) application registrations.
-
-It's also possible to redeploy just the web app binaries. With the parameter `-onlyBuild 1`, new zip packages of the service and the app are deployed to the web applications.
-
-After successful deployment, you can start using the services. See [Manage the OPC Vault certificate management service](howto-opc-vault-manage.md).
-
-## Delete the services from the subscription
-
-Here's how:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Go to the resource group in which the service was deployed.
-3. Select **Delete resource group**, and confirm.
-4. After a short while, all deployed service components are deleted.
-5. Go to **Azure Active Directory** > **App registrations**.
-6. There should be three registrations listed for each deployed resource group. The registrations have the following names:
-`resourcegroup-client`, `resourcegroup-module`, `resourcegroup-service`. Delete each registration separately.
-
-Now all deployed components are removed.
-
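-If you prefer the command line, the same cleanup can be scripted with the Azure CLI. This is a sketch only; the resource group name is a placeholder, and the app registration IDs must be taken from the list output:
-
```powershell
# Delete the resource group and all deployed service components (name is a placeholder).
az group delete --name myResourceGroup --yes

# List the app registrations created by the deployment, then delete each one by its appId.
az ad app list --display-name "myResourceGroup" --query "[].{name:displayName, appId:appId}" --output table
az ad app delete --id <appId-from-the-list-output>
```
-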
-## Troubleshooting deployment failures
-
-### Resource group name
-
-Use a short and simple resource group name. The name is also used to name resources and the service URL prefix. As such, it must comply with resource naming requirements.
-
-### Website name already in use
-
-It's possible that the name of the website is already in use. If so, you need to use a different resource group name. The hostnames in use by the deployment script are: https:\//resourcegroupname.azurewebsites.net and https:\//resourcegroupname-service.azurewebsites.net.
-Other names of services are built by the combination of short name hashes, and are unlikely to conflict with other services.
-
-### Azure AD registration
-
-The deployment script tries to register three Azure AD applications in Azure AD. Depending on your permissions in the selected Azure AD tenant, this operation might fail. There are two options:
-
-- If you chose an Azure AD tenant from a list of tenants, restart the script and choose a different one from the list.
-- Alternatively, deploy a private Azure AD tenant in another subscription. Restart the script, and select to use it.
-
-## Deployment script options
-
-The script takes the following parameters:
--
-```
--resourceGroupName
-```
-
-This can be the name of an existing or a new resource group.
-
-```
--subscriptionId
-```
--
-This is the subscription ID where resources will be deployed. It's optional.
-
-```
--subscriptionName
-```
--
-Alternatively, you can use the subscription name.
-
-```
--resourceGroupLocation
-```
--
-This is a resource group location. If specified, this parameter tries to create a new resource group in this location. This parameter is also optional.
--
-```
--tenantId
-```
--
-This is the Azure AD tenant to use.
-
-```
--development 0|1
-```
-
-Deploys for development. Uses debug builds, and sets the ASP.NET environment to Development. Creates `.publishsettings` for import in Visual Studio 2017, so that you can deploy the app and the service directly. This parameter is optional.
-
-```
--onlyBuild 0|1
-```
-
-Rebuilds and redeploys only the web apps, and rebuilds the Docker containers. This parameter is optional.
-
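-For example, a development deployment followed by a binaries-only redeployment might look like the following sketch. The exact parameter combinations depend on the script version; the subscription, tenant, and resource group values are placeholders:
-
```powershell
# Initial development deployment: debug builds, Swagger UI enabled.
.\deploy.ps1 -subscriptionName "MySubscriptionName" -resourceGroupLocation "East US" `
    -tenantId "myTenantId" -resourceGroupName "myResourceGroup" -development 1

# Later: rebuild and redeploy only the web app binaries to the same resource group.
.\deploy.ps1 -subscriptionName "MySubscriptionName" -tenantId "myTenantId" `
    -resourceGroupName "myResourceGroup" -onlyBuild 1
```
-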
-[azure-free]:https://azure.microsoft.com/free/
-[powershell-install]:https://azure.microsoft.com/downloads/#powershell
-[docker-url]: https://www.docker.com/
-[dotnet-install]: https://dotnet.microsoft.com/download
-
-## Next steps
-
-Now that you have learned how to deploy OPC Vault from scratch, you can:
-
-> [!div class="nextstepaction"]
-> [Manage OPC Vault](howto-opc-vault-manage.md)
iot-accelerators Howto Opc Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-manage.md
- Title: How to manage the OPC Vault certificate service - Azure | Microsoft Docs
-description: Manage the OPC Vault root CA certificates and user permissions.
- Previously updated : 8/16/2019
-# Manage the OPC Vault certificate service
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article explains the administrative tasks for the OPC Vault certificate management service in Azure. It includes information about how to renew Issuer CA certificates, how to renew the Certificate Revocation List (CRL), and how to grant and revoke user access.
-
-## Create or renew the root CA certificate
-
-After deploying OPC Vault, you must create the root CA certificate. Without a valid Issuer CA certificate, you can't sign or issue application certificates. Refer to [Certificates](howto-opc-vault-secure-ca.md#certificates) to manage your certificates with reasonable, secure lifetimes. Renew an Issuer CA certificate after half of its lifetime. When renewing, also consider that the configured lifetime of a newly-signed application certificate shouldn't exceed the lifetime of the Issuer CA certificate.
-> [!IMPORTANT]
-> The Administrator role is required to create or renew the Issuer CA certificate.
-
-1. Open your certificate service at `https://myResourceGroup-app.azurewebsites.net`, and sign in.
-2. Go to **Certificate Groups**.
-3. There is one default certificate group listed. Select **Edit**.
-4. In **Edit Certificate Group Details**, you can modify the subject name and lifetime of your CA and application certificates. The subject and the lifetimes should only be set once before the first CA certificate is issued. Lifetime changes during operations might result in inconsistent lifetimes of issued certificates and CRLs.
-5. Enter a valid subject (for example, `CN=My CA Root, O=MyCompany, OU=MyDepartment`).<br>
- > [!IMPORTANT]
- > If you change the subject, you must renew the Issuer certificate, or the service will fail to sign application certificates. The subject of the configuration is checked against the subject of the active Issuer certificate. If the subjects don't match, certificate signing is refused.
-6. Select **Save**.
-7. If you encounter a "forbidden" error at this point, your user credentials don't have the administrator permission to modify or create a new root certificate. By default, the user who deployed the service has administrator and signing roles with the service. Other users need to be added to the Approver, Writer or Administrator roles, as appropriate in the Azure Active Directory (Azure AD) application registration.
-8. Select **Details**. This should show the updated information.
-9. Select **Renew CA Certificate** to issue the first Issuer CA certificate, or to renew the Issuer certificate. Then select **OK**.
-10. After a few seconds, you'll see **Certificate Details**. To download the latest CA certificate and CRL for distribution to your OPC UA applications, select **Issuer** or **Crl**.
-
-Now the OPC UA certificate management service is ready to issue certificates for OPC UA applications.
-
-## Renew the CRL
-
-Renewal of the CRL is an update that should be distributed to the applications at regular intervals. OPC UA devices that support the CRL Distribution Point X509 extension can update the CRL directly from the microservice endpoint. Other OPC UA devices might require manual updates, or can be updated by using GDS server push extensions to update the trust lists with the certificates and CRLs.
-
-In the following workflow, all certificate requests in a deleted state are revoked in the CRLs that correspond to the Issuer CA certificates for which they were issued. The version number of the CRL is incremented by 1. <br>
-> [!NOTE]
-> All issued CRLs are valid until the expiration of the Issuer CA certificate. This is because the OPC UA specification doesn't require a mandatory, deterministic distribution model for CRL.
-
-> [!IMPORTANT]
-> The Administrator role is required to renew the Issuer CRL.
-
-1. Open your certificate service at `https://myResourceGroup.azurewebsites.net`, and sign in.
-2. Go to the **Certificate Groups** page.
-3. Select **Details**. This should show the current certificate and CRL information.
-4. Select **Update CRL Revocation List (CRL)** to issue an updated CRL for all active Issuer certificates in the OPC Vault storage.
-5. After a few seconds, you'll see **Certificate Details**. To download the latest CA certificate and CRL for distribution to your OPC UA applications, select **Issuer** or **Crl**.
-
-## Manage user roles
-
-You manage user roles for the OPC Vault microservice in the Azure AD Enterprise Application. For a detailed description of the role definitions, see [Roles](howto-opc-vault-secure-ca.md#roles).
-
-By default, an authenticated user in the tenant can sign in to the service as a Reader. Roles with higher privileges require manual management in the Azure portal, or by using PowerShell.
-
-### Add user
-
-1. Open the Azure portal.
-2. Go to **Azure Active Directory** > **Enterprise applications**.
-3. Choose the registration of the OPC Vault microservice (by default, your `resourceGroupName-service`).
-4. Go to **Users and Groups**.
-5. Select **Add User**.
-6. Select or invite the user for assignment to a specific role.
-7. Select the role for the users.
-8. Select **Assign**.
-9. For users in the Administrator or Approver role, continue to add Azure Key Vault access policies.
-
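-As an alternative to the portal steps above, app role assignment can also be scripted with the AzureAD PowerShell module. The following is only a sketch; the service principal search string, user name, and role name are placeholders, and the available role names depend on the app manifest of your deployment:
-
```powershell
# Requires the AzureAD module: Install-Module AzureAD; Connect-AzureAD
# Look up the OPC Vault enterprise application and the user (names are placeholders).
$sp   = Get-AzureADServicePrincipal -SearchString "myResourceGroup-service"
$user = Get-AzureADUser -ObjectId "newuser@contoso.com"

# Pick the app role to assign, for example Writer.
$role = $sp.AppRoles | Where-Object { $_.DisplayName -eq "Writer" }

# Assign the role to the user on the OPC Vault service application.
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId -Id $role.Id
```
-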
-### Remove user
-
-1. Open the Azure portal.
-2. Go to **Azure Active Directory** > **Enterprise applications**.
-3. Choose the registration of the OPC Vault microservice (by default, your `resourceGroupName-service`).
-4. Go to **Users and Groups**.
-5. Select a user with a role to remove, and then select **Remove**.
-6. For removed users in the Administrator or Approver role, also remove them from Azure Key Vault policies.
-
-### Add user access policy to Azure Key Vault
-
-Additional access policies are required for Approvers and Administrators.
-
-By default, the service identity has only limited permissions to access Key Vault, to prevent elevated operations or changes from taking place without user impersonation. The basic service permissions are Get and List, for both secrets and certificates. For secrets, there is only one exception: the service can delete a private key from the secret store after it's accepted by a user. All other operations require user-impersonated permissions.
-
-#### For an Approver role, the following permissions must be added to Key Vault
-
-1. Open the Azure portal.
-2. Go to your OPC Vault `resourceGroupName`, used during deployment.
-3. Go to the Key Vault `resourceGroupName-xxxxx`.
-4. Go to **Access Policies**.
-5. Select **Add new**.
-6. Skip the template. There's no template that matches requirements.
-7. Choose **Select Principal**, and select the user to be added, or invite a new user to the tenant.
-8. Select the following **Key permissions**: **Get**, **List**, and **Sign**.
-9. Select the following **Secret permissions**: **Get**, **List**, **Set**, and **Delete**.
-10. Select the following **Certificate permissions**: **Get** and **List**.
-11. Select **OK**, and select **Save**.
-
-#### For an Administrator role, the following permissions must be added to Key Vault
-
-1. Open the Azure portal.
-2. Go to your OPC Vault `resourceGroupName`, used during deployment.
-3. Go to the Key Vault `resourceGroupName-xxxxx`.
-4. Go to **Access Policies**.
-5. Select **Add new**.
-6. Skip the template. There's no template that matches requirements.
-7. Choose **Select Principal**, and select the user to be added, or invite a new user to the tenant.
-8. Select the following **Key permissions**: **Get**, **List**, and **Sign**.
-9. Select the following **Secret permissions**: **Get**, **List**, **Set**, and **Delete**.
-10. Select the following **Certificate permissions**: **Get**, **List**, **Update**, **Create**, and **Import**.
-11. Select **OK**, and select **Save**.
-
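-The same access policies can also be granted from the command line with `az keyvault set-policy`. This is a sketch only; the Key Vault name and user principal names are placeholders:
-
```powershell
# Approver role: key signing plus the secret permissions needed for private key handover.
az keyvault set-policy --name "myResourceGroup-xxxxx" --upn "approver@contoso.com" `
    --key-permissions get list sign `
    --secret-permissions get list set delete `
    --certificate-permissions get list

# Administrator role: additionally allows certificate update, create, and import.
az keyvault set-policy --name "myResourceGroup-xxxxx" --upn "admin@contoso.com" `
    --key-permissions get list sign `
    --secret-permissions get list set delete `
    --certificate-permissions get list update create import
```
-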
-### Remove user access policy from Azure Key Vault
-
-1. Open the Azure portal.
-2. Go to your OPC Vault `resourceGroupName`, used during deployment.
-3. Go to the Key Vault `resourceGroupName-xxxxx`.
-4. Go to **Access Policies**.
-5. Find the user to remove, and select **Delete**.
-
-## Next steps
-
-Now that you have learned how to manage OPC Vault certificates and users, you can:
-
-> [!div class="nextstepaction"]
-> [Secure communication of OPC devices](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Vault Secure Ca https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-secure-ca.md
- Title: How to run the OPC Vault certificate management service securely - Azure | Microsoft Docs
-description: Describes how to run the OPC Vault certificate management service securely in Azure, and reviews other security guidelines to consider.
- Previously updated : 8/16/2019
-# Run the OPC Vault certificate management service securely
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article explains how to run the OPC Vault certificate management service securely in Azure, and reviews other security guidelines to consider.
-
-## Roles
-
-### Trusted and authorized roles
-
-The OPC Vault microservice allows for distinct roles to access various parts of the service.
-
-> [!IMPORTANT]
-> During deployment, the script only adds the user who runs the deployment script as a user for all roles. For a production deployment, you should review this role assignment, and reconfigure appropriately by following the guidelines below. This task requires manual assignment of roles and services in the Azure Active Directory (Azure AD) Enterprise Applications portal.
-
-### Certificate management service roles
-
-The OPC Vault microservice defines the following roles:
--- **Reader**: By default, any authenticated user in the tenant has read access.
- - Read access to applications and certificate requests. Can list and query for applications and certificate requests. Also device discovery information and public certificates are accessible with read access.
-- **Writer**: The Writer role is assigned to a user to add write permissions for certain tasks.
- - Read/Write access to applications and certificate requests. Can register, update, and unregister applications. Can create certificate requests and obtain approved private keys and certificates. Can also delete private keys.
-- **Approver**: The Approver role is assigned to a user to approve or reject certificate requests. The role doesn't include any other role.
- - In addition to the Approver role to access the OPC Vault microservice API, the user must also have the key signing permission in Azure Key Vault to be able to sign the certificates.
- - The Writer and Approver role should be assigned to different users.
- The Approver's main task is to approve or reject certificate requests.
-- **Administrator**: The Administrator role is assigned to a user to manage the certificate groups. The role doesn't include the Approver role, but includes the Writer role.
- - The administrator can manage the certificate groups, change the configuration, and revoke application certificates by issuing a new Certificate Revocation List (CRL).
- - Ideally, the Writer, Approver, and Administrator roles are assigned to different users. For additional security, a user with the Approver or Administrator role also needs key-signing permission in Key Vault, to issue certificates or to renew an Issuer CA certificate.
- - In addition to the microservice administration role, the role includes, but isn't limited to:
- Responsibility for administering the implementation of the CA's security practices.
- - Management of the generation, revocation, and suspension of certificates.
- - Cryptographic key life-cycle management (for example, the renewal of the Issuer CA keys).
- - Installation, configuration, and maintenance of services that operate the CA.
- - Day-to-day operation of the services.
- - CA and database backup and recovery.
-
-### Other role assignments
-
-Also consider the following roles when you're running the service:
-
-- Business owner of the certificate procurement contract with the external root certification authority (for example, when the owner purchases certificates from an external CA or operates a CA that is subordinate to an external CA).
-- Development and validation of the Certificate Authority.
-- Review of audit records.
-- Personnel that help support the CA or manage the physical and cloud facilities, but aren't directly trusted to perform CA operations, are in the *authorized* role. The set of tasks that persons in the authorized role are allowed to perform must also be documented.
-
-### Review memberships of trusted and authorized roles quarterly
-
-Review membership of trusted and authorized roles at least quarterly. Ensure that the set of people (for manual processes) or service identities (for automated processes) in each role is kept to a minimum.
-
-### Role separation between certificate requester and approver
-
-The certificate issuance process must enforce role separation between the certificate requester and certificate approver roles (persons or automated systems). Certificate issuance must be authorized by a certificate approver role that verifies that the certificate requester is authorized to obtain certificates. Any person who holds the certificate approver role must be formally authorized.
-
-### Restrict assignment of privileged roles
-
-You should restrict assignment of privileged roles, such as authorizing membership of the Administrators and Approvers group, to a limited set of authorized personnel. Any privileged role changes must have access revoked within 24 hours. Finally, review privileged role assignments on a quarterly basis, and remove any unneeded or expired assignments.
-
-### Privileged roles should use two-factor authentication
-
-Use multi-factor authentication (also called two-factor authentication) for interactive sign-ins of Approvers and Administrators to the service.
-
-## Certificate service operation guidelines
-
-### Operational contacts
-
-The certificate service must have an up-to-date security response plan on file, which contains detailed operational incident response contacts.
-
-### Security updates
-
-All systems must be continuously monitored and updated with latest security updates.
-
-> [!IMPORTANT]
-> The GitHub repository of the OPC Vault service is continuously updated with security patches. Monitor these updates, and apply them to the service at regular intervals.
-
-### Security monitoring
-
-Subscribe to or implement appropriate security monitoring. For example, subscribe to a central monitoring solution (such as Azure Security Center or Microsoft 365 monitoring solution), and configure it appropriately to ensure that security events are transmitted to the monitoring solution.
-
-> [!IMPORTANT]
-> By default, the OPC Vault service is deployed with [Azure Application Insights](../azure-monitor/app/devops.md) as a monitoring solution. Adding a security solution like [Azure Security Center](https://azure.microsoft.com/services/security-center/) is highly recommended.
-
-### Assess the security of open-source software components
-
-All open-source components used within a product or service must be free of moderate or greater security vulnerabilities.
-
-> [!IMPORTANT]
-> During continuous integration builds, the GitHub repository of the OPC Vault service scans all components for vulnerabilities. Monitor these updates on GitHub, and apply them to the service at regular intervals.
-
-### Maintain an inventory
-
-Maintain an asset inventory for all production hosts (including persistent virtual machines), devices, all internal IP address ranges, VIPs, and public DNS domain names. Whenever you add or remove a system, device IP address, VIP, or public DNS domain, you must update the inventory within 30 days.
-
-#### Inventory of the default Azure OPC Vault microservice production deployment
-
-In Azure:
-- **App Service Plan**: App service plan for service hosts. Default S1.
-- **App Service** for microservice: The OPC Vault service host.
-- **App Service** for sample application: The OPC Vault sample application host.
-- **Key Vault Standard**: To store secrets and Azure Cosmos DB keys for the web services.
-- **Key Vault Premium**: To host the Issuer CA keys, for signing service, and for vault configuration and storage of application private keys.
-- **Azure Cosmos DB**: Database for application and certificate requests.
-- **Application Insights**: (optional) Monitoring solution for web service and application.
-- **Azure AD Application Registration**: A registration for the sample application, the service, and the edge module.
-
-For the cloud services, all hostnames, resource groups, resource names, subscription IDs, and tenant IDs used to deploy the service should be documented.
-
-In Azure IoT Edge or a local IoT Edge server:
-- **OPC Vault IoT Edge module**: To support a factory network OPC UA Global Discovery Server.
-
-For the IoT Edge devices, the hostnames and IP addresses should be documented.
-
-### Document the Certification Authorities (CAs)
-
-The CA hierarchy documentation must contain all operated CAs. This includes all related
-subordinate CAs, parent CAs, and root CAs, even when they aren't managed by the service.
-Instead of formal documentation, you can provide an exhaustive set of all non-expired CA certificates.
-
-> [!NOTE]
-> The OPC Vault sample application supports the download of all certificates used and produced in the service for documentation.
-
-### Document the issued certificates by all Certification Authorities (CAs)
-
-Provide an exhaustive set of all certificates issued in the past 12 months.
-
-> [!NOTE]
-> The OPC Vault sample application supports the download of all certificates used and produced in the service for documentation.
-
-### Document the standard operating procedure for securely deleting cryptographic keys
-
-During the lifetime of a CA, key deletion might happen only rarely. This is why no user has the Key Vault Certificate Delete right assigned, and why there are no APIs exposed to delete an Issuer CA certificate. The manual standard operating procedure for securely deleting certification authority cryptographic keys is only available by directly accessing Key Vault in the Azure portal. You can also delete the certificate group in Key Vault. To ensure immediate deletion, disable the
-[Key Vault soft delete](../key-vault/general/soft-delete-overview.md) functionality.
-
-## Certificates
-
-### Certificates must comply with minimum certificate profile
-
-The OPC Vault service is an online CA that issues end entity certificates to subscribers. The OPC Vault microservice follows these guidelines in the default implementation.
--- All certificates must include the following X.509 fields, as specified below:
- - The content of the version field must be v3.
- - The contents of the serialNumber field must include at least 8 bytes of entropy obtained from a FIPS (Federal Information Processing Standards) 140 approved random number generator.<br>
- > [!IMPORTANT]
- > The OPC Vault serial number is by default 20 bytes, and is obtained from the operating system cryptographic random number generator. The random number generator is FIPS 140 approved on Windows devices, but not on Linux. Consider this when choosing a service deployment that uses Linux VMs or Linux docker containers, on which the underlying technology OpenSSL isn't FIPS 140 approved.
- - The issuerUniqueID and subjectUniqueID fields must not be present.
- - End-entity certificates must be identified with the basic constraints extension, in accordance with IETF RFC 5280.
- - The pathLenConstraint field must be set to 0 for the Issuing CA certificate.
- - The Extended Key Usage extension must be present, and must contain the minimum set of Extended Key Usage object identifiers (OIDs). The anyExtendedKeyUsage OID (2.5.29.37.0) must not be specified.
- - The CRL Distribution Point (CDP) extension must be present in the Issuer CA certificate.<br>
- > [!IMPORTANT]
- > The CDP extension is present in OPC Vault CA certificates. Nevertheless, OPC UA devices use custom methods to distribute CRLs.
- - The Authority Information Access extension must be present in the subscriber certificates.<br>
- > [!IMPORTANT]
- > The Authority Information Access extension is present in OPC Vault subscriber certificates. Nevertheless, OPC UA devices use custom methods to distribute Issuer CA information.
-- Approved asymmetric algorithms, key lengths, hash functions and padding modes must be used.
- - RSA and SHA-2 are the only supported algorithms.
- - RSA can be used for encryption, key exchange, and signature.
- - RSA encryption must use only the OAEP, RSA-KEM, or RSA-PSS padding modes.
- - Key lengths greater than or equal to 2048 bits are required.
- - Use the SHA-2 family of hash algorithms (SHA256, SHA384, and SHA512).
- - RSA Root CA keys with a typical lifetime greater than or equal to 20 years must be 4096 bits or greater.
- - RSA Issuer CA keys must be at least 2048 bits. If the CA certificate expiration date is after 2030, the CA key must be 4096 bits or greater.
-- Certificate lifetime
- - Root CA certificates: The maximum certificate validity period for root CAs must not exceed 25 years.
- - Sub CA or online Issuer CA certificates: The maximum certificate validity period for CAs that are online and issue only subscriber certificates must not exceed 6 years. For these CAs, the related private signature key must not be used longer than 3 years to issue new certificates.<br>
- > [!IMPORTANT]
- > The Issuer certificate, as it is generated in the default OPC Vault microservice without external Root CA, is treated like an online Sub CA, with respective requirements and lifetimes. The default lifetime is set to 5 years, with a key length greater than or equal to 2048.
- - All asymmetric keys must have a maximum 5-year lifetime, and a recommended 1-year lifetime.<br>
- > [!IMPORTANT]
- > By default, the lifetimes of application certificates issued with OPC Vault have a lifetime of 2 years, and should be replaced every year.
- - Whenever a certificate is renewed, it's renewed with a new key.
-- OPC UA-specific extensions in application instance certificates
- - The subjectAltName extension includes the application Uri and hostnames. These might also include FQDN, IPv4, and IPv6 addresses.
- - The keyUsage includes digitalSignature, nonRepudiation, keyEncipherment, and dataEncipherment.
- - The extendedKeyUsage includes serverAuth and clientAuth.
- - The authorityKeyIdentifier is specified in signed certificates.
-
-### CA keys and certificates must meet minimum requirements
-
-- **Private keys**: RSA keys must be at least 2048 bits. If the CA certificate expiration date is after 2030, the CA key must be 4096 bits or greater.
-- **Lifetime**: The maximum certificate validity period for CAs that are online and issue only subscriber certificates must not exceed 6 years. For these CAs, the related private signature key must not be used longer than 3 years to issue new certificates.
-
-### CA keys are protected using Hardware Security Modules
-
-OPC Vault uses Azure Key Vault Premium, and keys are protected by FIPS 140-2 Level 2 Hardware Security Modules (HSM).
-
-The cryptographic modules that Key Vault uses, whether HSM or software, are FIPS validated. Keys created or imported as HSM-protected are processed inside an HSM, validated to FIPS 140-2 Level 2. Keys created or imported as software-protected are processed inside cryptographic modules validated to FIPS 140-2 Level 1.
-
-## Operational practices
-
-### Document and maintain standard operational PKI practices for certificate enrollment
-
-Document and maintain standard operational procedures (SOPs) for how CAs issue certificates, including:
-- How the subscriber is identified and authenticated.
-- How the certificate request is processed and validated (if applicable, include also how certificate renewal and rekey requests are processed).
-- How issued certificates are distributed to the subscribers.
-
-The OPC Vault microservice SOP is described in [OPC Vault architecture](overview-opc-vault-architecture.md) and [Manage the OPC Vault certificate service](howto-opc-vault-manage.md). The practices follow "OPC Unified Architecture Specification Part 12: Discovery and Global Services."
--
-### Document and maintain standard operational PKI practices for certificate revocation
-
-The certificate revocation process is described in [OPC Vault architecture](overview-opc-vault-architecture.md) and [Manage the OPC Vault certificate service](howto-opc-vault-manage.md).
-
-### Document CA key generation ceremony
-
-The Issuer CA key generation in the OPC Vault microservice is simplified, due to the secure storage in Azure Key Vault. For more information, see [Manage the OPC Vault certificate service](howto-opc-vault-manage.md).
-
-However, when you're using an external Root certification authority, a CA key generation ceremony must adhere to the following requirements.
-
-The CA key generation ceremony must be performed against a documented script that includes at least the following items:
-- Definition of roles and participant responsibilities.
-- Approval for conduct of the CA key generation ceremony.
-- Cryptographic hardware and activation materials required for the ceremony.
-- Hardware preparation (including asset/configuration information update and sign-off).
-- Operating system installation.
-- Specific steps performed during the CA key generation ceremony, such as:
- - CA application installation and configuration.
- - CA key generation.
- - CA key backup.
- - CA certificate signing.
- - Import of signed keys in the protected HSM of the service.
- - CA system shutdown.
- - Preparation of materials for storage.
--
-## Next steps
-
-Now that you have learned how to securely manage OPC Vault, you can:
-
-> [!div class="nextstepaction"]
-> [Secure OPC UA devices with OPC Vault](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Vault Secure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-secure.md
- Title: Secure the communication of OPC UA devices with OPC Vault - Azure | Microsoft Docs
-description: How to register OPC UA applications, and how to issue signed application certificates for your OPC UA devices with OPC Vault.
- Previously updated : 8/16/2018
-# Use the OPC Vault certificate management service
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article explains how to register applications, and how to issue signed application certificates for your OPC UA devices.
-
-## Prerequisites
-
-### Deploy the certificate management service
-
-First, deploy the service to the Azure cloud. For details, see [Deploy the OPC Vault certificate management service](howto-opc-vault-deploy.md).
-
-### Create the Issuer CA certificate
-
-If you haven't done so yet, create the Issuer CA certificate. For details, see [Create and manage the Issuer certificate for OPC Vault](howto-opc-vault-manage.md).
-
-## Secure OPC UA applications
-
-### Step 1: Register your OPC UA application
-
-> [!IMPORTANT]
-> The Writer role is required to register an application.
-
-1. Open your certificate service at `https://myResourceGroup-app.azurewebsites.net`, and sign in.
-2. Go to **Register New**. For an application registration, a user needs to have at least the Writer role assigned.
-3. The entry form follows naming conventions in OPC UA. For example, in the following screenshot, the settings for the [OPC UA Reference Server](https://github.com/OPCFoundation/UA-.NETStandard/tree/master/Applications/ReferenceServer) sample in the OPC UA .NET Standard stack are shown:
-
- ![Screenshot of UA Reference Server Registration](media/howto-opc-vault-secure/reference-server-registration.png "UA Reference Server Registration")
-
-4. Select **Register** to register the application in the certificate service application database. The workflow directly guides the user to the next step to request a signed certificate for the application.
-
-### Step 2: Secure your application with a CA signed application certificate
-
-Secure your OPC UA application by issuing a signed certificate based on a Certificate Signing
-Request (CSR). Alternatively, you can request a new key pair, which includes a new private key in PFX or PEM format. For information about which method is supported for your application, see the documentation of your OPC UA device. In general, the CSR method is recommended, because it doesn't require a private key to be transferred over a wire.
-
-#### Request a new certificate with a new keypair
-
-1. Go to **Applications**.
-2. Select **New Request** for a listed application.
-
- ![Screenshot of Request New Certificate](media/howto-opc-vault-secure/request-new-certificate.png "Request New Certificate")
-
-3. Select **Request new KeyPair and Certificate** to request a private key and a new signed certificate with the public key for your application.
-
- ![Screenshot of Generate a New KeyPair and Certificate](media/howto-opc-vault-secure/generate-new-key-pair.png "Generate New Key Pair")
-
-4. Fill in the form with a subject and the domain names. For the private key, choose PEM or PFX with password. Select **Generate New KeyPair** to create the certificate request.
-
- ![Screenshot that shows the View Certificate Request Details screen and the Generate New KeyPair button.](media/howto-opc-vault-secure/approve-reject.png "Approve Certificate")
-
-5. Approval requires a user with the Approver role, and with signing permissions in Azure Key Vault. In the typical workflow, the Approver and Requester roles should be assigned to different users. Select **Approve** or **Reject** to start or cancel the actual creation of the key pair and the signing operation. The new key pair is created and stored securely in Azure Key Vault, until downloaded by the certificate requester. The resulting certificate with public key is signed by the CA. These operations can take a few seconds to finish.
-
- ![Screenshot of View Certificate Request Details, with approval message at bottom](media/howto-opc-vault-secure/view-key-pair.png "View Key Pair")
-
-6. The resulting private key (PFX or PEM) and certificate (DER) can be downloaded from here in the selected format, as a binary file download. A base64 encoded version is also available, for example, to copy and paste the certificate to a command line or text entry.
-7. After the private key is downloaded and stored securely, you can select **Delete Private Key**. The certificate with the public key remains available for future use.
-8. Due to the use of a CA signed certificate, the CA cert and Certificate Revocation List (CRL) should be downloaded here as well.
-
-Now it depends on the OPC UA device how to apply the new key pair. Typically, the CA cert and CRL are copied to a `trusted` folder, while the public and private keys of the application certificate are applied to an `own` folder in the certificate store. Some devices might already support server push for certificate updates. Refer to the documentation of your OPC UA device.
-
-#### Request a new certificate with a CSR
-
-1. Go to **Applications**.
-2. Select **New Request** for a listed application.
-
- ![Screenshot of Request New Certificate](media/howto-opc-vault-secure/request-new-certificate.png "Request New Certificate")
-
-3. Select **Request new Certificate with Signing Request** to request a new signed certificate for your application.
-
- ![Screenshot of Generate a new Certificate](media/howto-opc-vault-secure/generate-new-certificate.png "Generate New Certificate")
-
-4. Upload CSR by selecting a local file or by pasting a base64 encoded CSR in the form. Select **Generate New Certificate**.
-
- ![Screenshot of View Certificate Request Details](media/howto-opc-vault-secure/approve-reject-csr.png "Approve CSR")
-
-5. Approval requires a user with the Approver role, and with signing permissions in Azure Key Vault. Select **Approve** or **Reject** to start or cancel the actual signing operation. The resulting certificate with public key is signed by the CA. This operation can take a few seconds to finish.
-
- ![Screenshot that shows the View Certificate Request Details and includes an approval message at bottom.](media/howto-opc-vault-secure/view-cert-csr.png "View Certificate")
-
-6. The resulting certificate (DER) can be downloaded from here as binary file. A base64 encoded version is also available, for example, to copy and paste the certificate to a command line or text entry.
-7. After the certificate is downloaded and stored securely, you can select **Delete Certificate**.
-8. Due to the use of a CA signed certificate, the CA cert and CRL should be downloaded here as well.
-
-Now it depends on the OPC UA device how to apply the new certificate. Typically, the CA cert and CRL are copied to a `trusted` folder, while the application certificate is applied to an `own` folder in the certificate store. Some devices might already support server push for certificate updates. Refer to the documentation of your OPC UA device.
-
-### Step 3: Device secured
-
-The OPC UA device is now ready to communicate with other OPC UA devices secured by CA signed certificates, without further configuration.
-
-## Next steps
-
-Now that you have learned how to secure OPC UA devices, you can:
-
-> [!div class="nextstepaction"]
-> [Run a secure certificate management service](howto-opc-vault-secure-ca.md)
iot-accelerators Overview Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-publisher.md
- Title: What is OPC Publisher - Azure | Microsoft Docs
-description: This article provides an overview of the features of OPC Publisher. It allows you to publish JSON-encoded telemetry data from OPC UA servers to Azure IoT Hub.
- Previously updated : 06/10/2019
-# What is OPC Publisher?
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-OPC Publisher is a reference implementation that demonstrates how to:
-
-- Connect to existing OPC UA servers.
-- Publish JSON encoded telemetry data from OPC UA servers in OPC UA Pub/Sub format, using a JSON payload, to Azure IoT Hub.
-
-You can use any of the transport protocols that the Azure IoT Hub client SDK supports: HTTPS, AMQP, and MQTT.
-
-The reference implementation includes:
-
-- An OPC UA *client* for connecting to existing OPC UA servers you have on your network.
-- An OPC UA *server* on port 62222 that you can use to manage what's published, and that offers IoT Hub direct methods to do the same.
-
-You can download the [OPC Publisher reference implementation](https://github.com/Azure/iot-edge-opc-publisher) from GitHub.
-
-The application is implemented using .NET Core technology and can run on any platform supported by .NET Core.
-
-OPC Publisher implements retry logic to establish connections to endpoints that don't respond to a certain number of keep alive requests, for example, if an OPC UA server stops responding because of a power outage.
-
-For each distinct publishing interval to an OPC UA server, the application creates a separate subscription over which all nodes with this publishing interval are updated.
-
-OPC Publisher supports batching of the data sent to IoT Hub to reduce network load. This batching sends a packet to IoT Hub only if the configured packet size is reached.
-
-This application uses the OPC Foundation OPC UA reference stack as NuGet packages. See [https://opcfoundation.org/license/redistributables/1.3/](https://opcfoundation.org/license/redistributables/1.3/) for the licensing terms.
-
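-To illustrate how publishing intervals map to subscriptions, the following sketch writes a minimal published-nodes file (the `pn.json` referenced by the `--pf` option in the deployment manifest shown in the OPC Twin module article). The endpoint URL and node IDs are placeholders, and the exact schema is defined by the OPC Publisher release you use:
-
```powershell
# Minimal sketch of a published-nodes file; nodes that share a publishing interval are grouped
# into the same subscription on the OPC UA server. All URLs and node IDs are placeholders.
$publishedNodes = @'
[
  {
    "EndpointUrl": "opc.tcp://myopcserver:62541/UA/SampleServer",
    "UseSecurity": true,
    "OpcNodes": [
      { "Id": "ns=2;s=MyTemperatureNode", "OpcSamplingInterval": 2000, "OpcPublishingInterval": 5000 },
      { "Id": "ns=2;s=MyPressureNode", "OpcSamplingInterval": 2000, "OpcPublishingInterval": 5000 }
    ]
  }
]
'@
Set-Content -Path .\pn.json -Value $publishedNodes
```
-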
-## Next steps
-
-Now you've learned what OPC Publisher is, the suggested next step is to learn how to:
-
-[Configure OPC Publisher](howto-opc-publisher-configure.md)
iot-accelerators Overview Opc Twin Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-twin-architecture.md
- Title: OPC Twin architecture - Azure | Microsoft Docs
-description: This article provides an overview of the OPC Twin architecture. It describes about the discovery, activation, browsing, and monitoring of the server.
- Previously updated : 11/26/2018
-# OPC Twin architecture
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The following diagrams illustrate the OPC Twin architecture.
-
-## Discover and activate
-
-1. The operator enables network scanning on the module or makes a one-time discovery using a discovery URL. The discovered endpoints and application information are sent via telemetry to the onboarding agent for processing. The OPC UA device onboarding agent processes OPC UA server discovery events sent by the OPC Twin IoT Edge module when in discovery or scan mode. The discovery events result in application registration and updates in the OPC UA device registry.
-
- ![Diagram that shows the OPC Twin architecture with the OPC Twin IoT Edge module in discovery or scan mode.](media/overview-opc-twin-architecture/opc-twin1.png)
-
-1. The operator inspects the certificate of the discovered endpoint and activates the registered endpoint twin for access.
-
- ![Diagram that shows the OPC Twin architecture with the IoT Edge "Twin identity".](media/overview-opc-twin-architecture/opc-twin2.png)
-
-## Browse and monitor
-
-1. Once activated, the operator can use the Twin service REST API to browse or inspect the server information model, read/write object variables and call methods. The user uses a simplified OPC UA API expressed fully in HTTP and JSON.
-
- ![Diagram that shows the OPC Twin architecture setup for browsing and inspecting the server information model.](media/overview-opc-twin-architecture/opc-twin3.png)
-
-1. The twin service REST interface can also be used to create monitored items and subscriptions in the OPC Publisher. The OPC Publisher allows telemetry to be sent from OPC UA server systems to IoT Hub. For more information about OPC Publisher, see [What is OPC Publisher](overview-opc-publisher.md).
-
- ![How OPC Twin works](media/overview-opc-twin-architecture/opc-twin4.png)
iot-accelerators Overview Opc Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-twin.md
- Title: What is OPC Twin - Azure | Microsoft Docs
-description: This article provides an overview of OPC Twin. OPC Twin provides discovery, registration, and remote control of industrial devices through REST APIs.
- Previously updated : 11/26/2018
-# What is OPC Twin?
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-OPC Twin consists of microservices that use Azure IoT Edge and IoT Hub to connect the cloud and the factory network. OPC Twin provides discovery, registration, and remote control of industrial devices through REST APIs. OPC Twin does not require an OPC Unified Architecture (OPC UA) SDK, is programming language agnostic, and can be included in a serverless workflow. This article describes several OPC Twin use cases.
-
-## Discovery and control
-You can use OPC Twin for simple discovery and registration.
-
-### Simple discovery and registration
-OPC Twin allows factory operators to scan the factory network, so that OPC UA servers can be discovered and registered. As an alternative, factory operators can also manually register OPC UA devices using a known discovery URL. For example, to connect to all the OPC UA devices after the IoT Edge gateway with an OPC Twin module has been installed on the factory floor, the factory operator can remotely trigger a scan of the network and visually see all the OPC UA servers.
-
-### Simple control
-OPC Twin allows factory operators to react to events and reconfigure their factory floor machines from the cloud, either automatically or manually on the fly. OPC Twin provides REST APIs to invoke services on the OPC UA server, browse its address space, and read/write variables and execute methods. For example, a boiler uses a temperature KPI to control the production line. The temperature sensor publishes the change in data using OPC Publisher. The factory operator receives an alert that the temperature has reached the threshold. The production line cools down automatically through OPC Twin, and the factory operator is notified of the cool-down.
-
-## Authentication
-You can use OPC Twin for simple authentication and for a simple developer experience.
-
-### Simple authentication
-OPC Twin uses Azure Active Directory (AAD)-based authentication and auditing from end to end. For example, OPC Twin enables an application built on top of it to determine what an operator has performed on a machine. On the machine side, this is done through OPC UA auditing. On the cloud side, it's done by storing an immutable client audit log and through AAD authentication on the REST API.
-
-### Simple developer experience
-OPC Twin can be used with applications written in any programming language through REST APIs. As developers integrate an OPC UA client into a solution, knowledge of the OPC UA SDK is not necessary. OPC Twin can seamlessly integrate into stateless, serverless architectures. For example, a full stack web developer who develops an application for an alarm and event dashboard can write the logic to respond to events in JavaScript or TypeScript using OPC Twin, without knowledge of C, C#, or the full OPC UA stack implementation.
-
-## Next steps
-
-Now that you have learned about OPC Twin and its uses, here is the suggested next step:
-
-[What is OPC Vault](overview-opc-vault.md)
iot-accelerators Overview Opc Vault Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-vault-architecture.md
- Title: OPC Vault architecture - Azure | Microsoft Docs
-description: OPC Vault certificate management service architecture
- Previously updated : 08/16/2019
-# OPC Vault architecture
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article gives an overview about the OPC Vault microservice and the OPC Vault IoT Edge module.
-
-OPC UA applications use application instance certificates to provide application level security. A secure connection is established by using asymmetric cryptography, for which the application certificates provide the public and private key pair. The certificates can be self-signed, or signed by a Certificate Authority (CA).
-
-An OPC UA application has a list of trusted certificates that represents the applications it trusts. These certificates can be self-signed or signed by a CA, or can be a Root-CA or a Sub-CA themselves. If a trusted certificate is part of a larger certificate chain, the application trusts all certificates that chain up to the certificate in the trust list. This is true as long as the full certificate chain can be validated.
-
-The major difference between trusting self-signed certificates and trusting a CA certificate
-is the installation effort required to deploy and maintain trust. There's also additional effort to host a company-specific CA.
-
-To distribute trust for self-signed certificates for multiple servers with a single client application, you must install all server application certificates on the client application trust list. Additionally, you must install the client application certificate on all server application trust lists. This administrative effort is quite a burden, and even increases when you have to consider certificate lifetimes and renew certificates.
-
-The use of a company-specific CA can greatly simplify the management of trust with
-multiple servers and clients. In this case, the administrator generates a CA signed
-application instance certificate once for every client and server used. In addition, the CA Certificate is installed in every application trust list, on all servers and clients. With this approach, only expired certificates need to be renewed and replaced for the affected applications.
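As a rough illustration with example numbers (chosen only for this sketch): with 10 servers and 5 clients that trust each other through self-signed certificates, each client trust list needs all 10 server certificates and each server trust list needs all 5 client certificates, roughly 10 × 5 + 5 × 10 = 100 trust-list installations to deploy and later renew. With a company-specific CA, the same 15 applications need their 15 CA-signed certificates plus the single CA certificate in each of the 15 trust lists, and only expired certificates have to be replaced.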
-
-Azure Industrial IoT OPC UA certificate management service helps you manage a company-specific CA for OPC UA applications. This service is based on the OPC Vault microservice. OPC Vault provides a microservice to host a company-specific CA in a secure cloud. This solution is backed by services secured by Azure Active Directory (Azure AD), Azure Key Vault with Hardware Security Modules (HSMs), Azure Cosmos DB, and optionally IoT Hub as an application store.
-
-The OPC Vault microservice is designed to support role-based workflow, where security
-administrators and approvers with signing rights in Azure Key Vault approve or reject requests.
-
-For compatibility with existing OPC UA solutions, the services include
-support for an OPC Vault microservice backed edge module. This implements the
-**OPC UA Global Discovery Server and Certificate Management** interface, to distribute certificates and trust lists according to Part 12 of the specification.
--
-## Architecture
-
-The architecture is based on the OPC Vault microservice, with an OPC Vault
-IoT Edge module for the factory network and a web sample UX to control the workflow:
-
-![Diagram of OPC Vault architecture](media/overview-opc-vault-architecture/opc-vault.png)
-
-## OPC Vault microservice
-
-The OPC Vault microservice consists of the following interfaces to implement
-the workflow to distribute and manage a company-specific CA for OPC UA applications.
-
-### Application
-- An OPC UA application can be a server or a client, or both. OPC Vault serves in this
-case as an application registration authority.
-- In addition to the basic operations to register, update, and unregister applications, there are also interfaces to find and query for applications with search expressions.
-- The certificate requests must reference a valid application, in order to process a request and to issue a signed certificate with all OPC UA-specific extensions.
-- The application service is backed by a database in Azure Cosmos DB.
-
-### Certificate group
-- A certificate group is an entity that stores a root CA or a sub CA certificate, including the private key to sign certificates.
-- The RSA key length, the SHA-2 hash length, and the lifetimes are configurable for both Issuer CA and signed application certificates.
-- You store the CA certificates in Azure Key Vault, backed with FIPS 140-2 Level 2 HSM. The private key never leaves the secure storage, because signing is done by a Key Vault operation secured by Azure AD.
-- You can renew the CA certificates over time, and have them remain in safe storage due to Key Vault history.
-- The revocation list for each CA certificate is also stored in Key Vault as a secret. When an application is unregistered, the application certificate is also revoked in the Certificate Revocation List (CRL) by an administrator.
-- You can revoke single certificates, as well as batched certificates.
-
-### Certificate request
-A certificate request implements the workflow to generate a new key pair or a signed certificate, by using a Certificate Signing Request (CSR) for an OPC UA application.
-- The request is stored in a database with accompanying information, like the subject or a CSR, and a reference to the OPC UA application.
-- The business logic in the service validates the request against the information stored in the application database. For example, the application Uri in the database must match the application Uri in the CSR.
-- A security administrator with signing rights (that is, the Approver role) approves or rejects the request. If the request is approved, a new key pair or signed certificate (or both) are generated. The new private key is securely stored in Key Vault, and the new signed public certificate is stored in the certificate request database.
-- The requester can poll the request status until it is approved or revoked. If the request was approved, the private key and the certificate can be downloaded and installed in the certificate store of the OPC UA application.
-- The requester can now accept the request to delete unnecessary information from the request database.
-
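The following is a sketch of how a requester-side script might drive this workflow over REST. The service URL, routes, field names, and polling contract are assumptions for illustration, not the documented OPC Vault microservice API.

```python
# Sketch of the requester side of the certificate request workflow described above.
# Routes, field names, and the polling contract are illustrative placeholders only.
import time
import requests

VAULT_SERVICE = "https://contoso-opc-vault.example.com"   # placeholder microservice URL
HEADERS = {"Authorization": "Bearer <AAD token>"}          # Azure AD token acquisition not shown

# 1. Submit a signing request for a registered application (CSR generated elsewhere).
request = requests.post(
    f"{VAULT_SERVICE}/v1/requests",
    headers=HEADERS,
    json={
        "applicationId": "<registered application id>",    # must reference a valid application
        "certificateGroupId": "DefaultApplicationGroup",    # placeholder certificate group
        "csr": "<base64-encoded CSR>",
    },
    timeout=30,
).json()

# 2. Poll until a security administrator approves or rejects the request.
while True:
    status = requests.get(
        f"{VAULT_SERVICE}/v1/requests/{request['requestId']}", headers=HEADERS, timeout=30
    ).json()
    if status["state"] in ("approved", "rejected"):
        break
    time.sleep(60)

# 3. If approved, download the signed certificate for installation in the application
#    certificate store, then accept the request so the service can clean up its data.
if status["state"] == "approved":
    cert = requests.get(
        f"{VAULT_SERVICE}/v1/requests/{request['requestId']}/certificate",
        headers=HEADERS,
        timeout=30,
    ).content
    requests.post(
        f"{VAULT_SERVICE}/v1/requests/{request['requestId']}/accept", headers=HEADERS, timeout=30
    )
```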
-Over the lifetime of a signed certificate, an application might be deleted or a key might become compromised. In such a case, a CA manager can:
-- Delete an application, which also deletes all pending and approved certificate requests of the app.
-- Delete just a single certificate request, if only a key is renewed or compromised.
-
-Now, compromised, approved, and accepted certificate requests are marked as deleted.
-
-A manager can regularly renew the Issuer CA CRL. At the renewal time, all the deleted certificate requests are revoked, and the certificate serial numbers are added to the CRL revocation list. Revoked certificate requests are marked as revoked. In urgent events, single certificate requests can be revoked, too.
-
-Finally, the updated CRLs are available for distribution to the participating OPC UA clients and servers.
-
-## OPC Vault IoT Edge module
-To support a factory network Global Discovery Server, you can deploy the OPC Vault module on the edge. Run it as a local .NET Core application, or start it in a Docker container. Note that because of a lack of OAuth2 authentication support in the current OPC UA .NET Standard stack, the functionality of the OPC Vault edge module is limited to a Reader role. A user can't be impersonated from the edge module to the microservice by using the OPC UA GDS standard interface.
-
-## Next steps
-
-Now that you have learned about the OPC Vault architecture, you can:
-
-> [!div class="nextstepaction"]
-> [Build and deploy OPC Vault](howto-opc-vault-deploy.md)
iot-accelerators Overview Opc Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-vault.md
- Title: What is OPC Vault - Azure | Microsoft Docs
-description: This article provides an overview of OPC Vault. It can configure, register, and manage certificate lifecycle for OPC UA applications in the cloud.
-- Previously updated : 11/26/2018------
-# What is OPC Vault?
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-OPC Vault is a microservice that can configure, register, and manage certificate lifecycle for OPC UA server and client applications in the cloud. This article describes the OPC Vault's simple use cases.
-
-## Certificate management
-
-For example, a manufacturing company needs to connect their OPC UA server machine to their newly built client application. When the manufacturer first accesses the server machine, an error message is immediately shown on the OPC UA server application to indicate that the client application is not secure. This mechanism is built into the OPC UA server machine to prevent any unauthorized application access, which protects against malicious hacking on the shop floor.
-
-## Application security management
-A security professional uses the OPC Vault microservice to easily enable an OPC UA server to communicate with any client application, because OPC Vault has all the functions for certificate registry, storage, and lifecycle management. Now that the OPC UA server is securely connected, it can communicate with the newly built client application.
-
-## The complete OPC Vault architecture
-The following diagram illustrates the complete OPC Vault architecture.
-
-![OPC Vault architecture](media/overview-opc-vault-architecture/opc-vault.png)
-
-## Next steps
-
-Now that you have learned about OPC Vault and its uses, here is the suggested next step:
-
-[OPC Vault architecture](overview-opc-vault-architecture.md)
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-dashboards.md
The following screenshot shows the dashboard in an application created from the
:::image type="content" source="media/howto-manage-dashboards/dashboard-sample-contoso.png" alt-text="Dashboard for applications based on the custom application template":::
-After you select **Edit** or **New**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard, and customize and remove tiles on the dashboard itself. For example, to add a **Telemetry** tile to show current temperature reported by one or more devices:
+After you select **Edit** or **New**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard, and customize and remove tiles on the dashboard itself. For example, to add a **Line Chart** tile to track telemetry values over time reported by one or more devices:
-1. Select a **Device Group** and then choose your devices in the **Devices** dropdown to show on the tile. You now see the available telemetry, properties, and commands from the devices.
-
-1. If needed, use the dropdown to select a telemetry value to show on the tile. Select **+ Telemetry**, **+ Property**, or **+ Cloud Property** to add more items to the tile.
+1. Select **Start with a Visual**, then choose **Line chart**, and then select **Add tile** or just drag and drop it onto the canvas.
+
+1. To configure the tile, select its gear icon. Enter a **Title** and select a **Device Group** and then choose your devices in the **Devices** dropdown to show on the tile.
:::image type="content" source="media/howto-manage-dashboards/device-details.png" alt-text="Add a temperature telemetry tile to the dashboard":::
-When you've selected all the values to show on the tile, select **Add tile**. The tile appears on the dashboard where you can change the visualization, resize it, move it, and configure it.
+When you've selected all the values to show on the tile, select **Update**.
When you've finished adding and customizing tiles on the dashboard, select **Save** to save the changes to the dashboard, which takes you out of edit mode.
The following table describes the different types of tile you can add to a dashb
| Image | Image tiles display a custom image and can be clickable. The URL can be a relative link to another page in the application, or an absolute link to an external site.|
| Label | Label tiles display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard such as descriptions, contact details, or help.|
| Count | Count tiles display the number of devices in a device group.|
-| Map | Map tiles display the [location](howto-use-location-data.md) of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display sampled route of where a device has been on the past week.|
+| Map(telemetry) | Map tiles display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display sampled route of where a device has been on the past week.|
+| Map(property) | Map tiles display the location of one or more devices on a map.|
| KPI | KPI tiles display aggregate telemetry values for one or more devices over a time period. For example, you can use it to show the maximum temperature and pressure reached for one or more devices during the last hour.|
| Line chart | Line chart tiles plot one or more aggregate telemetry values for one or more devices for a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices for the last hour.|
| Bar chart | Bar chart tiles plot one or more aggregate telemetry values for one or more devices for a time period. For example, you can display a bar chart to show the average temperature and pressure of one or more devices over the last hour.|
The following table describes the different types of tile you can add to a dashb
| Last Known Value | Last known value tiles display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices. |
| Event History | Event History tiles display the events for a device over a time period. For example, you can use it to show all the valve open and close events for one or more devices during the last hour.|
| Property | Property tiles display the current value for properties and cloud properties of one or more devices. For example, you can use this tile to display device properties such as the manufacturer or firmware version for a device. |
+| State Chart | State chart tiles plot state changes for one or more devices over a set time range. For example, you can use this tile to display device property changes, such as temperature changes for a device. |
+| Event Chart | Event chart tiles display telemetry events for one or more devices over a set time range. For example, you can use this tile to display telemetry events such as temperature changes for a device. |
+| State History | State history tiles list and display status changes for State telemetry.|
+| External Content | External content tiles let you load content from an external source. |
Currently, you can add up to 10 devices to tiles that support multiple devices.
The following screenshot shows the effect of the conditional formatting rule:
### Tile formatting
-This feature, available in KPI, LKV, and property tiles, lets users adjust font size, choose decimal precision, abbreviate numeric values (for example format 1,700 as 1.7K), or wrap string values in their tiles.
+This feature, available in KPI, LKV, and Property tiles, lets users adjust font size, choose decimal precision, abbreviate numeric values (for example format 1,700 as 1.7K), or wrap string values in their tiles.
:::image type="content" source="media/howto-manage-dashboards/tile-format.png" alt-text="Tile Format":::
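As a rough illustration of the abbreviation and precision behavior described above (for example, 1,700 shown as 1.7K), the following sketch shows the general idea; it is not IoT Central's implementation.

```python
# Illustrative sketch of abbreviated numeric formatting like the tile option described
# above (1,700 -> 1.7K). This is not IoT Central code, just the general idea.
def abbreviate(value: float, decimals: int = 1) -> str:
    for threshold, suffix in ((1e9, "B"), (1e6, "M"), (1e3, "K")):
        if abs(value) >= threshold:
            return f"{value / threshold:.{decimals}f}{suffix}"
    return f"{value:.{decimals}f}"

print(abbreviate(1700))      # 1.7K
print(abbreviate(2300000))   # 2.3M
print(abbreviate(42, 0))     # 42
```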
machine-learning Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/execute-r-script.md
azureml_main <- function(dataframe1, dataframe2){
}
```
-## Uploading files
-The Execute R Script module supports uploading files by using the Azure Machine Learning R SDK.
-
-The following sample shows how to upload an image file in Execute R Script:
-```R
-azureml_main <- function(dataframe1, dataframe2){
- print("R script run.")
-
- # Generate a jpeg graph
- img_file_name <- "rect.jpg"
- jpeg(file=img_file_name)
- example(rect)
- dev.off()
-
- upload_files_to_run(names = list(file.path("graphic", img_file_name)), paths=list(img_file_name))
--
- # Return datasets as a Named List
- return(list(dataset1=dataframe1, dataset2=dataframe2))
-}
-```
-
-After the pipeline run is finished, you can preview the image in the right panel of the module.
-
-> [!div class="mx-imgBorder"]
-> ![Preview of uploaded image](media/module/upload-image-in-r-script.png)
## How to configure Execute R Script
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-azure-machine-learning-architecture.md
The studio is also where you access the interactive tools that are part of Azure
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

+ Interact with the service in any Python environment with the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-+ Interact with the service in any R environment with the [Azure Machine Learning SDK for R](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html) (preview).
+ Use [Azure Machine Learning designer](concept-designer.md) to perform the workflow steps without writing code.
+ Use [Azure Machine Learning CLI](./reference-azure-machine-learning-cli.md) for automation.
-+ The [Many Models Solution Accelerator](https://aka.ms/many-models) (preview) builds on Azure Machine Learning and enables you to train, operate, and manage hundreds or even thousands of machine learning models.
## Next steps
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
Following tools and environments are already installed on the compute instance:
|-|:-:|
|RStudio Server Open Source Edition (preview)||
|R kernel||
-|Azure Machine Learning SDK for R|[azuremlsdk](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html)</br>SDK samples|
|**PYTHON** tools & environments|Details|
|-|-|
Following tools and environments are already installed on the compute instance:
|Conda packages|`cython`</br>`numpy`</br>`ipykernel`</br>`scikit-learn`</br>`matplotlib`</br>`tqdm`</br>`joblib`</br>`nodejs`</br>`nb_conda_kernels`|
|Deep learning packages|`PyTorch`</br>`TensorFlow`</br>`Keras`</br>`Horovod`</br>`MLFlow`</br>`pandas-ml`</br>`scrapbook`|
|ONNX packages|`keras2onnx`</br>`onnx`</br>`onnxconverter-common`</br>`skl2onnx`</br>`onnxmltools`|
-|Azure Machine Learning Python & R SDK samples||
+|Azure Machine Learning Python samples||
Python packages are all installed in the **Python 3.8 - AzureML** environment. Compute instance has Ubuntu 18.04 as the base OS.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
You can create Azure Machine Learning compute instances or compute clusters from
* The Python SDK and the Azure CLI:
    * [Compute instance](how-to-create-manage-compute-instance.md).
    * [Compute cluster](how-to-create-attach-compute-cluster.md).
-* The [R SDK](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html#section-compute-targets) (preview).
* An Azure Resource Manager template. For an example template, see [Create an Azure Machine Learning compute cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-amlcompute).
* A machine learning [extension for the Azure CLI](reference-azure-machine-learning-cli.md#resource-management).
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-environments.md
To update the package, specify a version number to force image rebuild, for exam
* Learn how to [create and use environments](how-to-use-environments.md) in Azure Machine Learning.
* See the Python SDK reference documentation for the [environment class](/python/api/azureml-core/azureml.core.environment%28class%29).
-* See the R SDK reference documentation for [environments](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html#section-environments).
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-train-machine-learning-model.md
The designer lets you train models using a drag and drop interface in your web b
+ [What is the designer?](concept-designer.md)
+ [Tutorial: Predict automobile price](tutorial-designer-automobile-price-train-score.md)
-## Many models solution accelerator
-
-The [Many Models Solution Accelerator](https://aka.ms/many-models) (preview) builds on Azure Machine Learning and enables you to train, operate, and manage hundreds or even thousands of machine learning models.
-
-For example, building a model __for each instance or individual__ in the following scenarios can lead to improved results:
-
-* Predicting sales for each individual store
-* Predictive maintenance for hundreds of oil wells
-* Tailoring an experience for individual users.
-
-For more information, see the [Many Models Solution Accelerator](https://aka.ms/many-models) on GitHub.
-
## Azure CLI

The machine learning CLI is an extension for the Azure CLI. It provides cross-platform CLI commands for working with Azure Machine Learning. Typically, you use the CLI to automate tasks, such as training a machine learning model.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
You can interact with your workspace in the following ways:
+ [Azure Machine Learning studio](https://ml.azure.com)
+ [Azure Machine Learning designer](concept-designer.md)
+ In any Python environment with the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-+ In any R environment with the [Azure Machine Learning SDK for R (preview)](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html).
+ On the command line using the Azure Machine Learning [CLI extension](./reference-azure-machine-learning-cli.md)
+ [Azure Machine Learning VS Code Extension](how-to-manage-resources-vscode.md#workspaces)
Machine learning tasks read and/or write artifacts to your workspace.
You can also perform the following workspace management tasks:
-| Workspace management task | Portal | Studio | Python SDK / R SDK | Azure CLI | VS Code
+| Workspace management task | Portal | Studio | Python SDK | Azure CLI | VS Code
|||||||
| Create a workspace | **&check;** | | **&check;** | **&check;** | **&check;** |
| Manage workspace access | **&check;** || | **&check;** ||
When you create a new workspace, it automatically creates several Azure resource
+ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.

> [!NOTE]
-> You can instead use existing Azure resource instances when you create the workspace with the [Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace), [R SDK](https://azure.github.io/azureml-sdk-for-r/reference/create_workspace.html), or the Azure Machine Learning CLI [using an ARM template](how-to-create-workspace-template.md).
+> You can instead use existing Azure resource instances when you create the workspace with the [Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace) or the Azure Machine Learning CLI [using an ARM template](how-to-create-workspace-template.md).
<a name="wheres-enterprise"></a>
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/tools-included.md
The Data Science Virtual Machine comes with the most useful data-science tools p
| [NVidia System Management Interface (nvidia-smi)](https://developer.nvidia.com/nvidia-system-management-interface) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [nvidia-smi on the DSVM](./dsvm-tools-deep-learning-frameworks.md#nvidia-system-management-interface-nvidia-smi) |
| [PyTorch](https://pytorch.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [PyTorch on the DSVM](./dsvm-tools-deep-learning-frameworks.md#pytorch) |
| [TensorFlow](https://www.tensorflow.org) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [TensorFlow on the DSVM](./dsvm-tools-deep-learning-frameworks.md#tensorflow) |
-| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (R, Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python/R SDK,CLI, samples) | [Azure ML SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
+| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | [Azure ML SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
| [XGBoost](https://github.com/dmlc/xgboost) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | [XGBoost on the DSVM](./dsvm-tools-data-science.md#xgboost) |
| [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [Vowpal Wabbit on the DSVM](./dsvm-tools-data-science.md#vowpal-wabbit) |
| [Weka](https://www.cs.waikato.ac.nz/ml/weka/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in this section are used to install R packages, and are required durin
| - | - |
| **cloud.r-project.org** | Used when installing CRAN packages. |
-> [!IMPORTANT]
-> Internally, the R SDK for Azure Machine Learning uses Python packages. So you must also allow Python hosts through the firewall.
## Next steps

* [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md)
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-assign-roles.md
Allows you to define a role scoped only to labeling data:
}
```
+### Labeling Team Lead
+
+Allows you to review and reject the labeled dataset and view labeling insights. In addition to it, this role also allows you to perform the role of a labeler.
+
+`labeling_team_lead_custom_role.json` :
+```json
+{
+ "properties": {
+ "roleName": "Labeling Team Lead",
+ "description": "Team lead for Labeling Projects",
+ "assignableScopes": [
+ "/subscriptions/<subscription_id>"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/write",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read"
+ ],
+ "notActions": [
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/write",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/delete",
+ "Microsoft.MachineLearningServices/workspaces/labeling/export/action"
+ ],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
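One way to put this definition to use, with placeholder values for illustration, is to save it as `labeling_team_lead_custom_role.json`, create the role with `az role definition create --role-definition labeling_team_lead_custom_role.json`, and then grant it with `az role assignment create --role "Labeling Team Lead" --assignee <user-or-group-object-id> --scope <workspace-resource-id>`.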
+
## Troubleshooting

Here are a few things to be aware of while you use Azure role-based access control (Azure RBAC):
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
new_dataset = ds.partition_by(name="repartitioned_ds", partition_keys=['country'
partition_keys = new_dataset.partition_keys # ['country']
```
->[!IMPORTANT]
-> TabularDataset partitions can also be applied in Azure Machine Learning pipelines as input to your ParallelRunStep in many models applications. See an example in the [Many Models accelerator documentation](https://github.com/microsoft/solution-accelerator-many-models/blob/master/01_Data_Preparation.ipynb).
-
## Explore data

After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
machine-learning Overview What Happened To Workbench https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-happened-to-workbench.md
Last updated 03/05/2020
The Azure Machine Learning Workbench application and some other early features were deprecated and replaced in the **September 2018** release to make way for an improved [architecture](concept-azure-machine-learning-architecture.md).
-To improve your experience, the release contains many significant updates prompted by customer feedback. The core functionality from experiment runs to model deployment hasn't changed. But now, you can use the robust <a href="/python/api/overview/azure/ml/intro" target="_blank">Python SDK</a>, R SDK, and the [Azure CLI](reference-azure-machine-learning-cli.md) to accomplish your machine learning tasks and pipelines.
+To improve your experience, the release contains many significant updates prompted by customer feedback. The core functionality from experiment runs to model deployment hasn't changed. But now, you can use the robust <a href="/python/api/overview/azure/ml/intro" target="_blank">Python SDK</a>, and the [Azure CLI](reference-azure-machine-learning-cli.md) to accomplish your machine learning tasks and pipelines.
Most of the artifacts that were created in the earlier version of Azure Machine Learning are stored in your own local or cloud storage. These artifacts won't ever disappear.
machine-learning Overview What Is Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-azure-ml.md
Azure Machine Learning provides all the tools developers and data scientists nee
+ Jupyter notebooks: use our [example notebooks](https://github.com/Azure/MachineLearningNotebooks) or create your own notebooks to leverage our <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK for Python</a> samples for your machine learning.
-+ The [Many Models Solution Accelerator](https://aka.ms/many-models) (preview) builds on Azure Machine Learning and enables you to train, operate, and manage hundreds or even thousands of machine learning models.
-
+ [Machine learning extension for Visual Studio Code (preview)](how-to-set-up-vs-code-remote.md) provides you with a full-featured development environment for building and managing your machine learning projects.
+ [Machine learning CLI](reference-azure-machine-learning-cli.md) is an Azure CLI extension that provides commands for managing Azure Machine Learning resources from the command line.
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| Jupyter, JupyterLab Integration | GA | YES | YES |
| Virtual Network (VNet) support | Public Preview | YES | YES |
| **SDK support** | | | |
-| [R SDK support](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html) | Public Preview | YES | YES |
| [Python SDK support](/python/api/overview/azure/ml/) | GA | YES | YES |
| **[Security](concept-enterprise-security.md)** | | | |
| Virtual Network (VNet) support for training | GA | YES | YES |
The information in the rest of this document provides information on what featur
| **Other** | | | |
| [Open Datasets](/azure/open-datasets/samples) | Public Preview | YES | YES |
| [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
-| [Many Models Solution Accelerator](https://github.com/microsoft/solution-accelerator-many-models) | Public Preview | NO | NO |
### Azure Government scenarios
The information in the rest of this document provides information on what featur
| Jupyter, JupyterLab Integration | GA | YES | N/A |
| Virtual Network (VNet) support | Public Preview | YES | N/A |
| **SDK support** | | | |
-| R SDK support | Public Preview | YES | N/A |
| Python SDK support | GA | YES | N/A |
| **Security** | | | |
| Virtual Network (VNet) support for training | GA | YES | N/A |
The information in the rest of this document provides information on what featur
| **Other** | | | |
| Open Datasets | Public Preview | YES | N/A |
| Custom Cognitive Search | Public Preview | YES | N/A |
-| Many Models | Public Preview | NO | N/A |
marketplace Create Managed Service Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-listing.md
Title: How to configure your Managed Service offer listing details in Microsoft Partner Center
-description: Learn how to configure your Managed Service offer listing details on Azure Marketplace using Partner Center.
+ Title: Configure Managed Service offer listing details in Microsoft Partner Center
+description: Configure Managed Service offer listing details on Azure Marketplace.
Previously updated : 12/23/2020 Last updated : 07/12/2021
-# How to configure your Managed Service offer listing details
+# Configure Managed Service offer listing details
The information you provide on the **Offer listing** page of Partner Center will be displayed on Azure Marketplace. This includes your offer name, description, media, and other marketing assets.
If you have support websites for Azure Global Customers and/or Azure Government
Under **Logos**, upload a **Large** logo in .PNG format between 216 x 216 and 350 x 350 pixels. Partner Center will automatically create **Medium** and **Small** logos, which you can replace later.
-* The large logo (from 216 x 216 to 350 x 350 px) appears on your offer listing on Azure Marketplace.
-* The medium logo (90 x 90 px) is shown when a new resource is created.
-* The small logo (48 x 48 px) is used on the Azure Marketplace search results.
+- The large logo (from 216 x 216 to 350 x 350 px) appears on your offer listing on Azure Marketplace.
+- The medium logo (90 x 90 px) is shown when a new resource is created.
+- The small logo (48 x 48 px) is used on the Azure Marketplace search results.
### Add screenshots (optional)
You can add links to YouTube or Vimeo videos that demonstrate your offer. These
3. Drag and drop a PNG file (1280 x 720 pixels) onto the gray **Thumbnail** box.
4. To add another video, repeat steps 1 through 3.
-Select **Save draft** before continuing to the next tab: **Preview audience**.
+Select **Save draft** before continuing to the next tab, **Preview audience**.
## Next steps
-* [Add a preview audience](create-managed-service-offer-preview.md)
+- [Add a preview audience](create-managed-service-offer-preview.md)
marketplace Create Managed Service Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-plans.md
Title: How to create plans for your Managed Service offer on Azure Marketplace
-description: Learn how to create plans for your Managed Service offer on Azure Marketplace using Microsoft Partner Center.
+ Title: Create plans for a Managed Service offer on Azure Marketplace
+description: Create plans for a Managed Service offer on Azure Marketplace.
-+ Previously updated : 12/23/2020 Last updated : 07/12/2021
-# How to create plans for your Managed Service offer
+# Create plans for a Managed Service offer
Managed Service offers sold through the Microsoft commercial marketplace must have at least one plan. You can create a variety of plans with different options within the same offer. These plans (sometimes referred to as SKUs) can differ in terms of version, monetization, or tiers of service. For detailed guidance on plans, see [Plans and pricing for commercial marketplace offers](./plans-pricing.md).
Authorizations define the entities in your managing tenant who can access resour
You can create up to 20 authorizations for each plan.

> [!TIP]
-> In most cases, you'll want to assign roles to an Azure AD user group or service principal, rather than to a series of individual user accounts. This lets you add or remove access for individual users without having to update and republish the plan when your access requirements change. When assigning roles to Azure AD groups, [the group type should be Security and not Office 365](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). For additional recommendations, see [Tenants, roles, and users in Azure Lighthouse scenarios](../lighthouse/concepts/tenants-users-roles.md).
+> In most cases, you'll want to assign roles to an Azure AD user group or service principal, rather than to a series of individual user accounts. This lets you add or remove access for individual users without having to update and republish the plan when your access requirements change. When assigning roles to Azure AD groups, the [group type](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) should be Security and not Office 365. For additional recommendations, see [Tenants, roles, and users in Azure Lighthouse scenarios](../lighthouse/concepts/tenants-users-roles.md).
-For each Authorization, you'll need to provide the following information. You can then select **+ Add authorization** as many times as needed to add more users and role definitions.
+Provide the following information for each **Authorization**. Select **+ Add authorization** as needed to add more users and role definitions.
-* **AAD Object ID**: the Azure AD identifier of a user, user group, or application that will be granted certain permissions (as defined by the Role Definition) to your customers' resources.
-* **AAD Object Display Name**: a friendly name to help the customer understand the purpose of this authorization. The customer will see this name when delegating resources.
-* **Role definition**: select one of the available Azure AD built-in roles from the list. This role will determine the permissions that the user in the **Principal ID** field will have on your customers' resources. For descriptions of these roles, see [Built-in roles](../role-based-access-control/built-in-roles.md) and [Role support for Azure Lighthouse](../lighthouse/concepts/tenants-users-roles.md#role-support-for-azure-lighthouse).
-
-> [!NOTE]
-> As applicable new built-in roles are added to Azure, they will become available here. There may be some delay before they appear.
-
-* **Assignable Roles**: this option will appear only if you have selected User Access Administrator in the **Role Definition** for this authorization. If so, you must add one or more assignable roles here. The user in the **Azure AD Object ID** field will be able to assign these roles to managed identities, which is required in order to [deploy policies that can be remediated](../lighthouse/how-to/deploy-policy-remediation.md). No other permissions normally associated with the User Access Administrator role will apply to this user.
+- **Display Name**: A friendly name to help the customer understand the purpose of this authorization. The customer will see this name when delegating resources.
+- **Principal ID**: The Azure AD identifier of a user, user group, or application that will be granted certain permissions (as defined by the Role Definition) to your customers' resources.
+- **Access type**: **Active** authorizations have the privileges assigned to the role at all times. Each plan must have at least one Active authorization. **Eligible** authorizations are time-limited and require activation by the customer. Eligible authorizations can be set with a maximum duration and an option to require multifactor authorization to activate for security purposes.
+- **Role**: Select one of the available Azure AD built-in roles from the list. This role will determine the permissions that the user in the **Principal ID** field will have on your customers' resources. For descriptions of these roles, see [Built-in roles](../role-based-access-control/built-in-roles.md) and [Role support for Azure Lighthouse](../lighthouse/concepts/tenants-users-roles.md#role-support-for-azure-lighthouse).
+ > [!NOTE]
+ > As applicable new built-in roles are added to Azure, they will become available here, although there may be some delay before they appear.
+- **Assignable Roles**: This option will appear only if you have selected User Access Administrator in the **Role Definition** for this authorization. If so, you must add one or more assignable roles here. The user in the **Azure AD Object ID** field will be able to assign these roles to [managed identities](../active-directory/managed-identities-azure-resources/overview.md), which is required in order to [deploy policies that can be remediated](../lighthouse/how-to/deploy-policy-remediation.md). No other permissions normally associated with the User Access Administrator role will apply to this user.
> [!TIP]
> To ensure you can [remove access to a delegation](../lighthouse/how-to/remove-delegation.md) if needed, include an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role). If this role is not assigned, delegated resources can only be removed by a user in the customer's tenant.
-Once you've completed all sections for your plan, you can select **+ Create new plan** to create additional plans. When you're done, select **Save draft** before continuing.
+Once you've completed all of the sections for your plan, you can select **+ Create new plan** to create additional plans. When you're done, select **Save draft**. When you're done creating plans, select **Plans** in the breadcrumb trail at the top of the window to return to the left-nav menu for the offer.
+
+## Updating an offer
+
+You can [publish an updated version of your offer](update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](../lighthouse/how-to/view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to review the changes and decide whether they want to update to the new version.
## Next steps
-* [Review and publish](review-publish-offer.md)
+- Exit plan setup and continue with optional [Co-sell with Microsoft](./co-sell-overview.md), or
+- [Review and publish your offer](review-publish-offer.md)
marketplace Create Managed Service Offer Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-preview.md
Title: How to add a preview audience for your Managed Service offer
-description: Learn how to add a preview audience for your Managed Service offer in Microsoft Partner Center.
+ Title: Add a preview audience for a Managed Service offer
+description: Add a preview audience for a Managed Service offer in Azure Marketplace.
Previously updated : 12/23/2020 Last updated : 7/16/2021
-# How to add a preview audience for your Managed Service offer
+# Add a preview audience for a Managed Service offer
This article describes how to configure a preview audience for a Managed Service offer in the commercial marketplace using Partner Center. The preview audience can review your offer before it goes live.
Add at least one Azure subscription ID, either individually (up to 10) or by upl
5. Save the file as a CSV file.
6. On the **Preview audience** page, select the **Import Audience (csv)** link.
7. In the **Confirm** dialog box, select **Yes**, then upload the CSV file.
-8. Select **Save draft** before continuing to the next tab.
+
+Select **Save draft** before continuing to the next tab, **Plan overview**.
## Next steps
marketplace Create Managed Service Offer Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-properties.md
+
+ Title: Configure Managed Service offer properties for Azure Marketplace
+description: Configure Managed Service offer properties for Azure Marketplace.
++++++ Last updated : 07/12/2021++
+# Configure Managed Service offer properties
+
+This page lets you define the categories used to group your offer on Azure Marketplace and the legal contracts that support your offer. This information ensures your Managed Service is displayed correctly on the online store and offered to the right set of customers.
+
+## Categories
+
+Select at least one and up to five categories to place your offer in the appropriate marketplace search areas. Be sure to describe later in the offer description how your offer supports these categories.
+
+## Provide terms and conditions
+
+Under **Legal**, provide your terms and conditions for this offer. Customers will be required to accept them before using the offer. You can also provide the URL where your terms and conditions can be found.
+
+Select **Save draft** before continuing to the next tab, **Offer listing**.
+
+## Next step
+
+- Configure [Offer listing](create-managed-service-offer-listing.md)
marketplace Create Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer.md
Title: How to create a Managed Service offer in the Microsoft commercial marketplace
-description: Learn how to create a new Managed Service offer for Azure Marketplace using the commercial marketplace program in Microsoft Partner Center.
+ Title: Create a Managed Service offer in Azure Marketplace
+description: Create a new Managed Service offer for Azure Marketplace.
Previously updated : 12/23/2020 Last updated : 07/12/2021
-# How to create a Managed Service offer for the commercial marketplace
+# Create a Managed Service offer for the commercial marketplace
This article explains how to create a Managed Service offer for the Microsoft commercial marketplace using Partner Center.
To publish a Managed Service offer, you must have earned a Gold or Silver Micros
4. In the **New offer** dialog box, enter an **Offer ID**. This is a unique identifier for each offer in your account. This ID is visible in the URL of the commercial marketplace listing and Azure Resource Manager templates, if applicable. For example, if you enter test-offer-1 in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
- * Each offer in your account must have a unique offer ID.
- * Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters.
- * The Offer ID can't be changed after you select **Create**.
+ - Each offer in your account must have a unique offer ID.
+ - Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters.
+ - The Offer ID can't be changed after you select **Create**.
5. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers.
6. To generate the offer and continue, select **Create**.
-## Configure lead management
+## Setup details
+
+This section does not apply for this offer type.
+
+## Customer leads
Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest in your consulting service. You can modify this connection at any time during or after you create the offer. For detailed guidance, see [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md).
To configure the lead management in Partner Center:
3. In the **Connection details** dialog box, select a lead destination from the list.
4. Complete the fields that appear. For detailed steps, see the following articles:
- * [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
- * [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
- * [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
- * [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
- * [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+ - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
5. To validate the configuration you provided, select the **Validate** link.
6. When you've configured the connection details, select **Connect**.
Under **Categories**, select at least one and up to five categories for grouping
Under **Legal**, provide your terms and conditions for this offer. Customers will be required to accept them before using the offer. You can also provide the URL where your terms and conditions can be found.
-Select **Save draft** before continuing.
+Select **Save draft** before continuing to the next tab, **Properties**.
## Next step
-* [Configure your Managed Service offer listing](./create-managed-service-offer-listing.md)
+- Configure offer [Properties](create-managed-service-offer-properties.md)
media-services Media Services Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/monitoring/media-services-event-schemas.md
The data object has the following properties:
| state | string | State of the live event. | | healthy | bool | Indicates whether ingest is healthy based on the counts and flags. Healthy is true if overlapCount = 0 && discontinuityCount = 0 && nonIncreasingCount = 0 && unexpectedBitrate = false. | | lastFragmentArrivalTime | string |The last time stamp in UTC that a fragment arrived at the ingest endpoint. Example date format is "2020-11-11 12:12:12:888999" |
-| ingestDriftValue | string | Measures the drift between the timestamp of the ingested content and the system clock of the ingest endpoint, measured in seconds (unit is an int64 string value). A non zero value indicates that the ingested content is arriving slower than system clock time. In other cases you will see the value 0 when there is no measured drift, or "n/a" when there are no incoming fragments. |
+| ingestDriftValue | string | Indicates the speed of delay, in seconds-per-minute, of the incoming audio or video data during the last minute. The value is greater than zero if data is arriving to the live event slower than expected in the last minute; zero if data arrived with no delay; and "n/a" if no audio or video data was received. Note that this value is unrelated to the presence or absence of missing data in the last minute. For example, if you have a contribution encoder sending in live content and it is slowing down due to processing issues or network latency, it may only be able to deliver a total of 58 seconds of audio or video in a one-minute period. This would be reported as 2 seconds of drift. If the encoder is able to catch up and send all 60 seconds of data every minute, you will see this value reported as 0. If there was a disconnection or discontinuity from the encoder, this value may still display as 0, because it does not account for breaks in the data - only data that is delayed in timestamps.|
| transcriptionState | string | This value is "On" for audio track heartbeats if live transcription is turned on, otherwise you will see an empty string. This state is only applicable to tracktype of "audio" for Live transcription. All other tracks will have an empty value.|
| transcriptionLanguage | string | The language code (in BCP-47 format) of the transcription language. For example "de-de" indicates German (Germany). The value is empty for the video track heartbeats, or when live transcription is turned off. |
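As a rough sketch of the drift arithmetic described for `ingestDriftValue` above (the function and variable names here are illustrative and not part of the event schema):

```python
# Illustration of the drift arithmetic described above: if only 58 seconds of media
# timestamps arrive in a 60-second wall-clock window, the reported drift is 2 seconds.
# Names here are illustrative; they are not fields of the heartbeat event itself.
def ingest_drift_seconds(media_seconds_received_last_minute: float) -> float:
    return max(0.0, 60.0 - media_seconds_received_last_minute)

print(ingest_drift_seconds(58))  # 2.0 -> encoder is falling behind
print(ingest_drift_seconds(60))  # 0.0 -> keeping up with real time
```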
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
See the [LiveEventIngestHeartbeat schema](./monitoring/media-services-event-sche
| New LiveEventIngestHeartbeat property | Description |
| -- | - |
| lastFragmentArrivalTime | The last time stamp in UTC that a fragment arrived at the ingest endpoint. Example date format is "2020-11-11 12:12:12:888999" |
-| ingestDriftValue | Measures the drift between the timestamp of the ingested content and the system clock in the ingest endpoint, measured in integer seconds per minute. A non zero value indicates that the ingested content is arriving slower than system clock time In other cases you will see 0, or "n/a" when there are no incoming fragments.|
+| ingestDriftValue | Indicates the speed of delay, in seconds-per-minute, of the incoming audio or video data during the last minute. The value is greater than zero if data is arriving to the live event slower than expected in the last minute; zero if data arrived with no delay; and "n/a" if no audio or video data was received. Note that this value is unrelated to the presence or absence of missing data in the last minute. For example, if you have a contribution encoder sending in live content and it is slowing down due to processing issues or network latency, it may only be able to deliver a total of 58 seconds of audio or video in a one-minute period. This would be reported as 2 seconds of drift. If the encoder is able to catch up and send all 60 seconds of data every minute, you will see this value reported as 0. If there was a disconnection or discontinuity from the encoder, this value may still display as 0, because it does not account for breaks in the data - only data that is delayed in timestamps. |
| transcriptionState | The state of the live transcription feature. This state is only applicable to tracktype of "audio" for Live transcription. All other tracks will have an empty value, or empty when disabled.|
| transcriptionLanguage | The BCP-47 language code used for this track if the tracktype is "audio". When transcriptionState is empty (off) this will have an empty value. All other non-audio tracks will also contain an empty value. |
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/quickstart-mysql-github-actions.md
You will use the connection string as a GitHub secret.
name: MySQL for GitHub Actions

on:
- push:
- branches: [ master ]
- pull_request:
- branches: [ master ]
-
+ push:
+ branches: [ master ]
+ pull_request:
+ branches: [ master ]
- jobs:
- build:
- runs-on: windows-latest
- steps:
- - uses: actions/checkout@v1
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- - uses: azure/mysql@v1
- with:
- server-name: MYSQL_SERVER_NAME
- connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
- sql-file: './data.sql'
-
- # Azure logout
- - name: logout
- run: |
- az logout
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ - uses: azure/mysql@v1
+ with:
+ server-name: MYSQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
+ sql-file: './data.sql'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
```
## Review your deployment
When your Azure MySQL database and repository are no longer needed, clean up the
## Next steps > [!div class="nextstepaction"]
-> [Learn about Azure and GitHub integration](/azure/developer/github/)
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics-policy-portal.md
++
+ Title: Deploy and manage Traffic Analytics using Azure Policy
+
+description: This article explains how to use the built-in policies to manage the deployment of Traffic Analytics
+
+documentationcenter: na
+++
+ms.devlang: na
+
+ na
+ Last updated : 07/11/2021++++
+# Deploy and manage Traffic Analytics using Azure Policy
+
+Azure Policy helps to enforce organizational standards and to assess compliance at scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. In this article, we cover three built-in policies available for [Traffic Analytics](./traffic-analytics.md) to manage your setup.
+
+If you are creating an Azure policy for the first time, you can read through:
+- [Azure Policy overview](../governance/policy/overview.md)
+- [Tutorial for creating policy](../governance/policy/assign-policy-portal.md#create-a-policy-assignment).
++
+## Locate the policies
+1. Go to the Azure portal: [portal.azure.com](https://portal.azure.com)
+
+Navigate to the Azure Policy page by searching for **Policy** in the top search bar.
+![Policy Home Page](./media/network-watcher-builtin-policy/1_policy-search.png)
+
+2. Go to the **Assignments** tab in the left pane
+
+![Assignments Tab](./media/network-watcher-builtin-policy/2_assignments-tab.png)
+
+3. Click the **Assign Policy** button
+
+![Assign Policy Button](./media/network-watcher-builtin-policy/3_assign-policy-button.png)
+
+4. Click the three dots menu under "Policy Definitions" to see the available policies
+
+5. Use the Type filter and choose "Built-in". Then search for "traffic analytics"
+
+You should see the three built-in policies:
+![Policy List for traffic analytics](./media/traffic-analytics/policy-filtered-view.png)
+
+6. Choose the policy you want to assign
+
+- *"Network Watcher flow logs should have traffic analytics enabled"* is the audit policy that flags non-compliant flow logs, that is flow logs without traffic analytics enabled
+- *"Configure network security groups to use specific workspace for traffic analytics"* and *"Configure network security groups to enable Traffic Analytics"* are the policies with a deployment action. They enable traffic analytics on all the NSGs overwriting/not overwriting already configured settings depending on the policy enabled.
+
+There are separate instructions for each policy below.
+
+## Audit Policy
+
+### Network Watcher flow logs should have traffic analytics enabled
+
+The policy audits all existing Azure Resource Manager objects of type "Microsoft.Network/networkWatchers/flowLogs" and checks whether Traffic Analytics is enabled via the "networkWatcherFlowAnalyticsConfiguration.enabled" property of the flow logs resource. It flags the flow logs resources that have the property set to false.
+
+If you want to see the full definition of the policy, visit the [Definitions tab](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyMenuBlade/Definitions) and search for "traffic analytics" to find the policy.
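For orientation, an audit policy like this one follows the standard Azure Policy rule shape: an `if` condition that matches flow logs resources and checks the enabled property, and a `then` block with the `audit` effect. The sketch below expresses that shape as a Python dictionary purely for illustration; the field alias string is an assumption and may not match the built-in definition exactly.

```python
import json

# Illustrative only: the general if/then shape of an audit policy rule.
# The alias for the enabled property is an assumption; check the built-in definition for the exact path.
audit_policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Network/networkWatchers/flowLogs"},
            {
                "field": "Microsoft.Network/networkWatchers/flowLogs/networkWatcherFlowAnalyticsConfiguration.enabled",
                "notEquals": "true",
            },
        ]
    },
    "then": {"effect": "audit"},
}

print(json.dumps(audit_policy_rule, indent=2))
```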
+
+### Assignment
+
+1. Fill in your policy details
+
+- Scope: It can be a subscription or a resource group. In the latter case, select the resource group that contains the flow logs resource (not the network security group).
+- Policy Definition: Should be chosen as shown in the "Locate the policies" section.
+- Assignment name: Choose a descriptive name.
+
+2. Click on "Review + Create" to review your assignment
+
+The policy does not require any parameters. Because you are assigning an audit policy, you do not need to fill in the details on the "Remediation" tab.
+
+![Audit Policy Review Traffic Analytics](./media/traffic-analytics/policy-one-assign.png)
+
+### Results
+
+To check the results, open the Compliance tab and search for the name of your Assignment.
+You should see something similar to the following screenshot once your policy runs. If your policy hasn't run yet, wait for some time and check again.
+
+![Audit Policy Results traffic analytics](./media/traffic-analytics/policy-one-results.png)
+
+## Deploy-If-not-exists Policy
+
+### Configure network security groups to use specific workspace for traffic analytics
+
+It flags the NSGs that do not have Traffic Analytics enabled. For a flagged NSG, either the corresponding flow logs resource does not exist, or it exists but Traffic Analytics is not enabled on it. You can create a remediation task if you want the policy to affect existing resources.
+Network Watcher is a regional service, so this policy applies only to NSGs belonging to a particular region within the selected scope. (For a different region, create another policy assignment.)
+
+Remediation can be assigned while assigning the policy or after the policy is assigned and evaluated. Remediation enables Traffic Analytics on all flagged resources with the provided parameters. If an NSG already has flow logs enabled to a particular storage ID but does not have Traffic Analytics enabled, remediation enables Traffic Analytics on that NSG with the provided parameters. If the storage ID provided in the parameters differs from the one already enabled for flow logs on a flagged NSG, the existing setting is overwritten with the provided storage ID during the remediation task. If you don't want to overwrite it, use the policy *"Configure network security groups to enable Traffic Analytics"* described below.
+
+If you want to see the full definition of the policy, you can visit the [Definitions tab](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyMenuBlade/Definitions) and search for "Traffic Analytics" to find the policy.
+
+### Configure network security groups to enable Traffic Analytics
+
+It is the same as the above policy, except that during remediation it does not use the parameters provided in the policy assignment to overwrite flow log settings on flagged NSGs that already have flow logs enabled but Traffic Analytics disabled.
+
+### Assignment
+
+1. Fill in your policy details
+
+- Scope: It can be a subscription or a resource group
+- Policy Definition: Should be chosen as shown in the "Locate the policies" section.
+- Assignment name: Choose a descriptive name.
+
+2. Add policy parameters (a sketch of an example parameter set follows these steps)
+
+- NSG Region: Azure region that the policy targets
+- Storage ID: Full resource ID of the storage account. This storage account should be in the same region as the NSG.
+- Network Watchers RG: Name of the resource group containing your Network Watcher resource. If you have not renamed it, you can enter 'NetworkWatcherRG', which is the default.
+- Network Watcher name: Name of the regional network watcher service. Format: NetworkWatcher_RegionName. Example: NetworkWatcher_centralus.
+- Workspace resource ID: Resource ID of the workspace where Traffic Analytics has to be enabled. The format is "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"
+- WorkspaceID: Workspace GUID
+- WorkspaceRegion: Region of the workspace (it need not be the same as the region of the NSG)
+- TimeInterval: Frequency at which processed logs are pushed into the workspace. Currently allowed values are 60 minutes and 10 minutes. The default value is 60 minutes.
+- Effect: DeployIfNotExists (already assigned value)
+
+3. Add Remediation details
+
+- Select *"Create Remediation task"* if you want the policy to affect existing resources
+- *"Create a Managed Identity"* should already be checked
+- Select the same location as before for your managed identity
+- You need Contributor or Owner permissions to use this policy. If you have these permissions, you should not see any errors.
+
+4. Click on "Review + Create" to review your assignment
+You should see something similar to the following screenshot.
+
+![DINE Policy review traffic analytics](./media/traffic-analytics/policy-two-review.png)
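To make step 2 concrete, the parameters above translate into a parameter object along the lines of the sketch below. Every key name and value here is a placeholder chosen to mirror the display names listed in step 2; the exact names used by the built-in policy definition may differ.

```python
import json

# Placeholder values only; substitute your own resource IDs, names, and regions.
traffic_analytics_parameters = {
    "nsgRegion": {"value": "centralus"},
    "storageId": {"value": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>"
                           "/providers/Microsoft.Storage/storageAccounts/<StorageAccountName>"},
    "networkWatcherRG": {"value": "NetworkWatcherRG"},
    "networkWatcherName": {"value": "NetworkWatcher_centralus"},
    "workspaceResourceId": {"value": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>"
                                     "/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"},
    "workspaceId": {"value": "<workspace GUID>"},
    "workspaceRegion": {"value": "centralus"},
    "timeInterval": {"value": "60"},
    "effect": {"value": "DeployIfNotExists"},
}

print(json.dumps(traffic_analytics_parameters, indent=2))
```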
++
+### Results
+
+To check the results, open the Compliance tab and search for the name of your Assignment.
+You should see something like the following screenshot once your policy runs. If your policy hasn't run yet, wait for some time and check again.
+
+![DINE Policy results traffic analytics](./media/traffic-analytics/policy-two-results.png)
+
+### Remediation
+
+To manually remediate, select *"Create Remediation task"* on the compliance tab shown above.
+
+![DINE Policy remediate traffic analytics](./media/traffic-analytics/policy-two-remediate.png)
++
+## Troubleshooting
+
+### Remediation task fails with "PolicyAuthorizationFailed" error code.
+
+Sample error: "The policy assignment '/subscriptions/123ds-fdf3657-fdjjjskms638/resourceGroups/DummyRG/providers/Microsoft.Authorization/policyAssignments/b67334e8770a4afc92e7a929/' resource identity does not have the necessary permissions to create deployment."
+
+In such scenarios, the assignment's managed identity must be granted access manually. Go to the appropriate subscription/resource group (the one containing the resources provided in the policy parameters) and grant Contributor access to the managed identity created by the policy. In the above example, "b67334e8770a4afc92e7a929" has to be added as a Contributor.
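If you script the fix, the role assignment comes down to three values: the scope containing the resources from the policy parameters, the managed identity's object ID, and the built-in Contributor role definition. The following is a minimal sketch of assembling those values; the helper name and inputs are illustrative, and b24988ac-6180-42a0-ab88-20f7382dd24c is the well-known built-in Contributor role ID.

```python
def contributor_assignment_inputs(subscription_id: str, resource_group: str, principal_object_id: str) -> dict:
    """Collect the values needed to grant Contributor to the policy assignment's managed identity."""
    scope = f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    role_definition_id = (
        f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
        "roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"  # built-in Contributor role
    )
    return {"scope": scope, "roleDefinitionId": role_definition_id, "principalId": principal_object_id}

print(contributor_assignment_inputs("123ds-fdf3657-fdjjjskms638", "DummyRG", "<managed-identity-object-id>"))
```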
++
+## Next steps
+
+- Learn more about [Traffic Analytics](./traffic-analytics.md)
+- Learn more about [Network Watcher](./index.yml)
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
To create and run a new scan, do the following:
5. Provide the below details:
- a. **Name**: The name of the scan
+ 1. **Name**: The name of the scan
- b. **Connect via integration runtime**: Select the configured
- self-hosted integration runtime.
+ 1. **Connect via integration runtime**: Select the configured self-hosted integration runtime.
- c. **Credential**: Select the credential to connect to your data
- source. Make sure to:
+ 1. **Credential**: Select the credential to connect to your data source. Make sure to:
- - Select Basic Authentication while creating a credential.
- - Provide the Metastore username in the User name input field
- - Store the Metastore password in the secret key.
+ - Select Basic Authentication while creating a credential.
+ - Provide the Metastore username in the User name input field
+ - Store the Metastore password in the secret key.
- To understand more on credentials, refer to the link [here](manage-credentials.md).
+ To understand more on credentials, refer to the link [here](manage-credentials.md).
- **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the cmdlet **cat /databricks/hive/conf/hive-site.xml**
+   **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the command **cat /databricks/hive/conf/hive-site.xml**
- The username and password can be accessed from the two properties as shown below
+ The username and password can be accessed from the two properties as shown below
- :::image type="content" source="media/register-scan-hive-metastore-source/databricks-credentials.png" alt-text="databricks-username-password-details" border="true":::
+ :::image type="content" source="media/register-scan-hive-metastore-source/databricks-credentials.png" alt-text="databricks-username-password-details" border="true":::
- d. **Metastore JDBC Driver Location**: Specify the path to the JDBC
- driver location on your VM where self-host integration runtime is
- running. This should be the path to valid JARs folder location.
+   1. **Metastore JDBC Driver Location**: Specify the path to the JDBC driver location on the VM where the self-hosted integration runtime is running. This should be the path to a valid JAR folder location.
- If you are scanning Databricks, refer to the section on Databricks below.
+ If you are scanning Databricks, refer to the section on Databricks below.
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
- e. **Metastore JDBC Driver Class**: Provide the connection driver class
- name. For example,\com.microsoft.sqlserver.jdbc.SQLServerDriver.
+   1. **Metastore JDBC Driver Class**: Provide the connection driver class name. For example, `com.microsoft.sqlserver.jdbc.SQLServerDriver`.
- **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the cmdlet **cat /databricks/hive/conf/hive-site.xml**
+   **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the command **cat /databricks/hive/conf/hive-site.xml**
- The driver class can be accessed from the property as shown below.
+ The driver class can be accessed from the property as shown below.
:::image type="content" source="media/register-scan-hive-metastore-source/databricks-driver-class-name.png" alt-text="databricks-driver-class-details" border="true":::
- f. **Metastore JDBC URL**: Provide the Connection URL value and define
- connection to Metastore DB server URL. For example,
- jdbc:sqlserver://hive.database.windows.net;database=hive;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=300
+ 1. **Metastore JDBC URL**: Provide the Connection URL value and define connection to Metastore DB server URL. For example, `jdbc:sqlserver://hive.database.windows.net;database=hive;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=300`.
- **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the cmdlet **cat /databricks/hive/conf/hive-site.xml**
+   **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the command **cat /databricks/hive/conf/hive-site.xml**
+
+ The JDBC URL can be accessed from the Connection URL property as shown below.
+
+ :::image type="content" source="media/register-scan-hive-metastore-source/databricks-jdbc-connection.png" alt-text="databricks-jdbc-url-details" border="true":::
- The JDBC URL can be accessed from the Connection URL property as shown below.
- :::image type="content" source="media/register-scan-hive-metastore-source/databricks-jdbc-connection.png" alt-text="databricks-jdbc-url-details" border="true":::
+ > [!NOTE]
+   > When you copy the URL from *hive-site.xml*, be sure to remove `amp;` from the string or the scan will fail.
- To this url, append the path to the location where SSL certificate is placed on your VM. The SSL certificate can be downloaded from [here](../mysql/howto-configure-ssl.md).
+   To this URL, append the path to the location where the SSL certificate is placed on your VM (a short sketch of assembling the final URL follows these steps). The SSL certificate can be downloaded from [here](../mysql/howto-configure-ssl.md).
- So the metastore JDBC URL will be:
+ The metastore JDBC URL will be:
- jdbc:mariadb://consolidated-westus2-prod-metastore-addl-1.mysql.database.azure.com:3306/organization1829255636414785?trustServerCertificate=true&amp;useSSL=true&sslCA=D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem
+ `jdbc:mariadb://consolidated-westus2-prod-metastore-addl-1.mysql.database.azure.com:3306/organization1829255636414785?trustServerCertificate=true&amp;useSSL=true&sslCA=D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem`
- g. **Metastore database name**: Provide the Hive Metastore Database name
+ 1. **Metastore database name**: Provide the Hive Metastore Database name.
- If you are scanning Databricks, refer to the section on Databricks below.
+ If you are scanning Databricks, refer to the section on Databricks below.
- **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the cmdlet **cat /databricks/hive/conf/hive-site.xml**
+   **Databricks usage**: Navigate to your Databricks cluster -> Apps -> Launch Web Terminal. Run the command **cat /databricks/hive/conf/hive-site.xml**
- The database name can be accessed from the JDBC URL property as shown below. For Example: organization1829255636414785
- :::image type="content" source="media/register-scan-hive-metastore-source/databricks-data-base-name.png" alt-text="databricks-database-name-details" border="true":::
+ The database name can be accessed from the JDBC URL property as shown below. For Example: organization1829255636414785
+
+ :::image type="content" source="media/register-scan-hive-metastore-source/databricks-data-base-name.png" alt-text="databricks-database-name-details" border="true":::
- h. **Schema**: Specify a list of Hive schemas to import. For example,
- schema1; schema2.
+ 1. **Schema**: Specify a list of Hive schemas to import. For example, schema1; schema2.
- All user schemas are imported if that list is
- empty. All system schemas (for example, SysAdmin) and objects are
- ignored by default.
+ All user schemas are imported if that list is empty. All system schemas (for example, SysAdmin) and objects are ignored by default.
- When the list is empty, all available schemas
- are imported.
- Acceptable schema name patterns using SQL LIKE expressions syntax include using %, e.g. A%; %B; %C%; D
+   Acceptable schema name patterns use SQL LIKE expression syntax, including the % wildcard. For example, the pattern list A%; %B; %C%; D matches schemas that:
- - start with A or
- - end with B or
- - contain C or
- - equal D
+ - start with A or
+ - end with B or
+ - contain C or
+ - equal D
- Usage of NOT and special characters are not acceptable.
+   Usage of NOT and special characters is not supported.
- i. **Maximum memory available**: Maximum memory (in GB) available on
- customer's VM to be used by scanning processes. This is dependent on
- the size of Hive Metastore database to be scanned.
- > [!Note]
- > **For scanning Databricks metastore**
- >
+   1. **Maximum memory available**: Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This depends on the size of the Hive Metastore database to be scanned.
+
+ > [!Note]
+ > **For scanning Databricks metastore**
+ >
- :::image type="content" source="media/register-scan-hive-metastore-source/scan.png" alt-text="scan hive source" border="true":::
+ :::image type="content" source="media/register-scan-hive-metastore-source/scan.png" alt-text="scan hive source" border="true":::
6. Click on **Continue**.
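Because the URL copied from *hive-site.xml* is XML-escaped and still needs the SSL certificate path appended (see the JDBC URL step above), the cleanup can be scripted. The sketch below uses a hypothetical helper with a placeholder host and certificate path; it is not part of the Azure Purview tooling.

```python
def build_metastore_jdbc_url(raw_url: str, ssl_ca_path: str) -> str:
    """Clean up a JDBC URL copied from hive-site.xml for use in a scan.

    Removes the XML escaping ('&amp;' -> '&') and appends the sslCA path pointing
    at the certificate downloaded to the self-hosted integration runtime machine.
    """
    cleaned = raw_url.replace("&amp;", "&")
    separator = "&" if "?" in cleaned else "?"
    return f"{cleaned}{separator}sslCA={ssl_ca_path}"

raw = ("jdbc:mariadb://example.mysql.database.azure.com:3306/metastore"
       "?trustServerCertificate=true&amp;useSSL=true")
print(build_metastore_jdbc_url(raw, r"D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem"))
```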
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
Previously updated : 06/25/2021 Last updated : 07/08/2021
The following table provides a brief description of each built-in role. Click th
> | [Storage Queue Data Message Processor](#storage-queue-data-message-processor) | Peek, retrieve, and delete a message from an Azure Storage queue. To learn which actions are required for a given data operation, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authenticate-with-azure-active-directory#permissions-for-calling-blob-and-queue-data-operations). | 8a0f0c08-91a1-4084-bc3d-661d67233fed | > | [Storage Queue Data Message Sender](#storage-queue-data-message-sender) | Add messages to an Azure Storage queue. To learn which actions are required for a given data operation, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authenticate-with-azure-active-directory#permissions-for-calling-blob-and-queue-data-operations). | c6a89b2d-59bc-44d0-9896-0f6e12d7b80a | > | [Storage Queue Data Reader](#storage-queue-data-reader) | Read and list Azure Storage queues and queue messages. To learn which actions are required for a given data operation, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authenticate-with-azure-active-directory#permissions-for-calling-blob-and-queue-data-operations). | 19e7f393-937e-4f77-808e-94535e297925 |
+> | [Storage Table Data Contributor](#storage-table-data-contributor) | Allows for read, write and delete access to Azure Storage tables and entities | 0a9a7e1f-b9d0-4cc4-a60d-0319b160aaa3 |
+> | [Storage Table Data Reader](#storage-table-data-reader) | Allows for read access to Azure Storage tables and entities | 76199698-9eea-4c19-bc75-cec21354c6b6 |
> | **Web** | | | > | [Azure Maps Data Contributor](#azure-maps-data-contributor) | Grants access to read, write, and delete access to map related data from an Azure maps account. | 8f5e0ce6-4f7b-4dcf-bddf-e6f48634a204 | > | [Azure Maps Data Reader](#azure-maps-data-reader) | Grants access to read map related data from an Azure maps account. | 423170ca-a8f6-4b0f-8487-9e4eb8f49bfa |
Read and list Azure Storage queues and queue messages. To learn which actions ar
} ```
+### Storage Table Data Contributor
+
+Allows for read, write and delete access to Azure Storage tables and entities
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/read | Query tables |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/write | Create tables |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/delete | Delete tables |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/entities/read | Query table entities |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/entities/write | Insert, merge, or replace table entities |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/entities/delete | Delete table entities |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/entities/add/action | Insert table entities |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/entities/update/action | Merge or update table entities |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows for read, write and delete access to Azure Storage tables and entities",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/0a9a7e1f-b9d0-4cc4-a60d-0319b160aaa3",
+ "name": "0a9a7e1f-b9d0-4cc4-a60d-0319b160aaa3",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Storage/storageAccounts/tableServices/tables/read",
+ "Microsoft.Storage/storageAccounts/tableServices/tables/write",
+ "Microsoft.Storage/storageAccounts/tableServices/tables/delete"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Storage/storageAccounts/tableServices/tables/entities/read",
+ "Microsoft.Storage/storageAccounts/tableServices/tables/entities/write",
+ "Microsoft.Storage/storageAccounts/tableServices/tables/entities/delete",
+ "Microsoft.Storage/storageAccounts/tableServices/tables/entities/add/action",
+ "Microsoft.Storage/storageAccounts/tableServices/tables/entities/update/action"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Storage Table Data Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Storage Table Data Reader
+
+Allows for read access to Azure Storage tables and entities
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/read | Query tables |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/tableServices/tables/entities/read | Query table entities |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows for read access to Azure Storage tables and entities",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/76199698-9eea-4c19-bc75-cec21354c6b6",
+ "name": "76199698-9eea-4c19-bc75-cec21354c6b6",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Storage/storageAccounts/tableServices/tables/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Storage/storageAccounts/tableServices/tables/entities/read"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Storage Table Data Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
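The Actions and DataActions lists in these role definitions are wildcard patterns that RBAC evaluation matches against the management or data operation being attempted, with NotActions and NotDataActions subtracted. The sketch below is a simplified model of that matching, not the actual Azure authorization engine.

```python
from fnmatch import fnmatch

def is_allowed(operation, allowed_patterns, excluded_patterns):
    """Simplified RBAC check: operation must match an allowed pattern and no excluded pattern."""
    def matches(patterns):
        return any(fnmatch(operation, pattern) for pattern in patterns)
    return matches(allowed_patterns) and not matches(excluded_patterns)

reader_data_actions = ["Microsoft.Storage/storageAccounts/tableServices/tables/entities/read"]
print(is_allowed("Microsoft.Storage/storageAccounts/tableServices/tables/entities/read",
                 reader_data_actions, []))   # True: Storage Table Data Reader can query entities
print(is_allowed("Microsoft.Storage/storageAccounts/tableServices/tables/entities/write",
                 reader_data_actions, []))   # False: the reader role grants no write data action
```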
+ ## Web
Push quarantined images to or pull quarantined images from a container registry.
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | *none* | |
+> | [Microsoft.ContainerRegistry](resource-provider-operations.md#microsoftcontainerregistry)/registries/quarantinedArtifacts/read | Allows pull or get of the quarantined artifacts from container registry. This is similar to Microsoft.ContainerRegistry/registries/quarantine/read except that it is a data action |
+> | [Microsoft.ContainerRegistry](resource-provider-operations.md#microsoftcontainerregistry)/registries/quarantinedArtifacts/write | Allows write or update of the quarantine state of quarantined artifacts. This is similar to Microsoft.ContainerRegistry/registries/quarantine/write action except that it is a data action |
> | **NotDataActions** | | > | *none* | |
Push quarantined images to or pull quarantined images from a container registry.
"Microsoft.ContainerRegistry/registries/quarantine/write" ], "notActions": [],
- "dataActions": [],
+ "dataActions": [
+ "Microsoft.ContainerRegistry/registries/quarantinedArtifacts/read",
+ "Microsoft.ContainerRegistry/registries/quarantinedArtifacts/write"
+ ],
"notDataActions": [] } ],
View and update permissions for Security Center. Same permissions as the Securit
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/* | Create and manage security components and policies |
+> | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/* | |
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | **NotActions** | | > | *none* | |
View and update permissions for Security Center. Same permissions as the Securit
"Microsoft.Resources/deployments/*", "Microsoft.Resources/subscriptions/resourceGroups/read", "Microsoft.Security/*",
+ "Microsoft.IoTSecurity/*",
"Microsoft.Support/*" ], "notActions": [],
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 06/25/2021 Last updated : 07/08/2021
Click the resource provider name in the following table to see the list of opera
| [Microsoft.Devices](#microsoftdevices) | | [Microsoft.DeviceUpdate](#microsoftdeviceupdate) | | [Microsoft.IoTCentral](#microsoftiotcentral) |
+| [Microsoft.IoTSecurity](#microsoftiotsecurity) |
| [Microsoft.NotificationHubs](#microsoftnotificationhubs) | | [Microsoft.TimeSeriesInsights](#microsofttimeseriesinsights) | | **Mixed reality** |
Azure service: [App Service Certificates](../app-service/configure-ssl-certifica
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
-> | Microsoft.CertificateRegistration/provisionGlobalAppServicePrincipalInUserTenant/Action | Provision service principal for service app principal |
+> | Microsoft.CertificateRegistration/provisionGlobalAppServicePrincipalInUserTenant/Action | ProvisionAKSCluster service principal for service app principal |
> | Microsoft.CertificateRegistration/validateCertificateRegistrationInformation/Action | Validate certificate purchase object without submitting it | > | Microsoft.CertificateRegistration/register/action | Register the Microsoft Certificates resource provider for the subscription | > | Microsoft.CertificateRegistration/certificateOrders/Write | Add a new certificateOrder or update an existing one |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/connections/Join/Action | Joins a Connection. | > | microsoft.web/connections/confirmconsentcode/action | Confirm Connections Consent Code. | > | microsoft.web/connections/listconsentlinks/action | List Consent Links for Connections. |
+> | microsoft.web/connections/listConnectionKeys/action | Lists API Connections Keys. |
+> | microsoft.web/connections/revokeConnectionKeys/action | Revokes API Connections Keys. |
> | Microsoft.Web/customApis/Read | Get the list of Custom API. | > | Microsoft.Web/customApis/Write | Creates or updates a Custom API. | > | Microsoft.Web/customApis/Delete | Deletes a Custom API. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/slots/metricdefinitions/read | Get Web Apps Slots Metric Definitions. | > | microsoft.web/sites/slots/metrics/read | Get Web Apps Slots Metrics. | > | microsoft.web/sites/slots/migratemysql/read | Get Web Apps Slots Migrate MySql. |
+> | microsoft.web/sites/slots/networkConfig/read | Get App Service Slots Network Configuration. |
+> | microsoft.web/sites/slots/networkConfig/write | Update App Service Slots Network Configuration. |
+> | microsoft.web/sites/slots/networkConfig/delete | Delete App Service Slots Network Configuration. |
> | microsoft.web/sites/slots/networktraces/operationresults/read | Get Web Apps Slots Network Trace Operation Results. | > | microsoft.web/sites/slots/operationresults/read | Get Web Apps Slots Operation Results. | > | microsoft.web/sites/slots/operations/read | Get Web Apps Slots Operations. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/staticSites/builds/listappsettings/Action | List app settings for a Static Site Build | > | Microsoft.Web/staticSites/builds/zipdeploy/action | Deploy a Static Site Build from zipped content | > | Microsoft.Web/staticSites/builds/config/Write | Create or update app settings for a Static Site Build |
+> | Microsoft.Web/staticSites/builds/functions/Read | List the functions for a Static Site Build |
> | Microsoft.Web/staticSites/builds/userProvidedFunctionApps/Delete | Detach a User Provided Function App from a Static Site Build | > | Microsoft.Web/staticSites/builds/userProvidedFunctionApps/Read | Get Static Site Build User Provided Function Apps | > | Microsoft.Web/staticSites/builds/userProvidedFunctionApps/Write | Register a User Provided Function App with a Static Site Build |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/staticSites/userProvidedFunctionApps/Delete | Detach a User Provided Function App from a Static Site | > | Microsoft.Web/staticSites/userProvidedFunctionApps/Read | Get Static Site User Provided Function Apps | > | Microsoft.Web/staticSites/userProvidedFunctionApps/Write | Register a User Provided Function App with a Static Site |
+> | Microsoft.Web/workerApps/read | Get the properties for a Worker App |
+> | Microsoft.Web/workerApps/write | Create a Worker App or update an existing one |
+> | Microsoft.Web/workerApps/delete | Delete a Worker App |
+> | Microsoft.Web/workerApps/operationResults/read | Get the results of a Worker App operation |
## Containers
Azure service: [Azure Databricks](/azure/databricks/)
> | Microsoft.Databricks/workspaces/updateDenyAssignment/action | Update deny assignment not actions for a managed resource group of a workspace | > | Microsoft.Databricks/workspaces/refreshWorkspaces/action | Refresh a workspace with new details like URL | > | Microsoft.Databricks/workspaces/dbWorkspaces/write | Initializes the Databricks workspace (internal only) |
+> | Microsoft.Databricks/workspaces/outboundNetworkDependenciesEndpoints/read | Gets a list of egress endpoints (network endpoints of all outbound dependencies) for an Azure Databricks Workspace. The operation returns properties of each egress endpoint |
> | Microsoft.Databricks/workspaces/privateEndpointConnectionProxies/read | Get Private Endpoint Connection Proxy | > | Microsoft.Databricks/workspaces/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxies | > | Microsoft.Databricks/workspaces/privateEndpointConnectionProxies/write | Put Private Endpoint Connection Proxies |
Azure service: [Azure Data Explorer](/azure/data-explorer/)
> | Microsoft.Kusto/Clusters/PrivateLinkResources/read | Reads private link resources | > | Microsoft.Kusto/Clusters/SKUs/read | Reads a cluster SKU resource. | > | Microsoft.Kusto/Clusters/SKUs/PrivateEndpointConnectionProxyValidation/action | Validates a private endpoint connection proxy |
+> | Microsoft.Kusto/Deployments/Preflight/action | Run a Preflight operation |
> | Microsoft.Kusto/Locations/CheckNameAvailability/action | Checks resource name availability. | > | Microsoft.Kusto/Locations/GetNetworkPolicies/action | Gets Network Intent Policies | > | Microsoft.Kusto/locations/operationresults/read | Reads operations resources |
Azure service: [Power BI Embedded](/azure/power-bi-embedded/)
> | Action | Description | > | | | > | Microsoft.PowerBIDedicated/register/action | Registers Power BI Dedicated resource provider. |
+> | Microsoft.PowerBIDedicated/register/action | Registers Power BI Dedicated resource provider. |
> | Microsoft.PowerBIDedicated/autoScaleVCores/read | Retrieves the information of the specificed Power BI Auto Scale V-Core. | > | Microsoft.PowerBIDedicated/autoScaleVCores/write | Creates or updates the specified Power BI Auto Scale V-Core. | > | Microsoft.PowerBIDedicated/autoScaleVCores/delete | Deletes the Power BI Auto Scale V-Core. |
Azure service: [Power BI Embedded](/azure/power-bi-embedded/)
> | Microsoft.PowerBIDedicated/capacities/suspend/action | Suspends the Capacity. | > | Microsoft.PowerBIDedicated/capacities/resume/action | Resumes the Capacity. | > | Microsoft.PowerBIDedicated/capacities/skus/read | Retrieve available SKU information for the capacity |
-> | Microsoft.PowerBIDedicated/locations/checkNameAvailability/action | Checks that the given Power BI capacity name is valid and not in use. |
+> | Microsoft.PowerBIDedicated/locations/checkNameAvailability/action | Checks that given Power BI Dedicated resource name is valid and not in use. |
+> | Microsoft.PowerBIDedicated/locations/checkNameAvailability/action | Checks that given Power BI Dedicated resource name is valid and not in use. |
+> | Microsoft.PowerBIDedicated/locations/operationresults/read | Retrieves the information of the specified operation result. |
> | Microsoft.PowerBIDedicated/locations/operationresults/read | Retrieves the information of the specified operation result. | > | Microsoft.PowerBIDedicated/locations/operationstatuses/read | Retrieves the information of the specified operation status. |
+> | Microsoft.PowerBIDedicated/locations/operationstatuses/read | Retrieves the information of the specified operation status. |
> | Microsoft.PowerBIDedicated/operations/read | Retrieves the information of operations |
+> | Microsoft.PowerBIDedicated/operations/read | Retrieves the information of operations |
+> | Microsoft.PowerBIDedicated/servers/read | Retrieves the information of the specified Power BI Dedicated Server. |
+> | Microsoft.PowerBIDedicated/servers/write | Creates or updates the specified Power BI Dedicated Server |
+> | Microsoft.PowerBIDedicated/servers/delete | Deletes the Power BI Dedicated Server |
+> | Microsoft.PowerBIDedicated/servers/suspend/action | Suspends the Server. |
+> | Microsoft.PowerBIDedicated/servers/resume/action | Resumes the Server. |
+> | Microsoft.PowerBIDedicated/servers/skus/read | Retrieve available SKU information for the Server. |
+> | Microsoft.PowerBIDedicated/skus/read | Retrieves the information of Skus |
> | Microsoft.PowerBIDedicated/skus/read | Retrieves the information of Skus | ### Microsoft.Purview
Azure service: [Azure Synapse Analytics](../synapse-analytics/index.yml)
> | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnectionProxies/read | Reads a private endpoint connection proxy | > | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnectionProxies/write | Writes a private endpoint connection proxy | > | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnectionProxies/delete | Deletes a private endpoint connection proxy |
+> | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnections/read | Reads a private endpoint connection |
+> | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnections/write | Writes a private endpoint connection |
> | Microsoft.Synapse/workspaces/kustoPools/PrivateLinkResources/read | Reads private link resources | > | Microsoft.Synapse/workspaces/libraries/read | Read Library Artifacts | > | Microsoft.Synapse/workspaces/managedIdentitySqlControlSettings/write | Update Managed Identity SQL Control Settings on the workspace |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/webhooks/write | Create or update a web hook | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/webhooks/delete | Delete a web hook | > | Microsoft.CognitiveServices/accounts/SpeechServices/speechrest/webhooks/read | Get one or more web hooks |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/phrases/read | Retrieves list of supported passphrases for a specific locale. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/profiles/write | Create a new speaker profile with specified locale. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/profiles/delete | Deletes an existing profile. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/profiles/read | Retrieves a set of profiles or retrieves a single profile by ID. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/profiles/verify/action | Verifies existing profiles against input audio. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/profiles/enrollments/write | Adds an enrollment to existing profile. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/profiles/reset/write | Resets existing profile to its original creation state. The reset operation does the following: |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/phrases/read | Retrieves list of supported passphrases for a specific locale. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/write | Creates a new speaker profile with specified locale. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/delete | Deletes an existing profile. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/identifysinglespeaker/action | Identifies who is speaking in input audio among a list of candidate profiles. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/read | Retrieves a set of profiles or retrieves a single profile by ID. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/verify/action | Verifies existing profiles against input audio. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/enrollments/write | Adds an enrollment to existing profile. |
+> | Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/profiles/reset/write | Resets existing profile to its original creation state. The reset operation does the following: |
> | Microsoft.CognitiveServices/accounts/SpeechServices/unified-speech/frontend/action | This endpoint manages the Speech Frontend | > | Microsoft.CognitiveServices/accounts/SpeechServices/unified-speech/management/action | This endpoint manages the Speech Frontend | > | Microsoft.CognitiveServices/accounts/SpeechServices/unified-speech/probes/action | This endpoint monitors the Speech Frontend health |
Azure service: [IoT Central](../iot-central/index.yml)
> | Microsoft.IoTCentral/IoTApps/delete | Deletes an IoT Central Applications | > | Microsoft.IoTCentral/operations/read | Gets all the available operations on IoT Central Applications |
+### Microsoft.IoTSecurity
+
+Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.IoTSecurity/unregister/action | Unregisters the subscription for Azure Defender for IoT |
+> | Microsoft.IoTSecurity/register/action | Registers the subscription for Azure Defender for IoT |
+> | Microsoft.IoTSecurity/alerts/read | Gets IoT Alerts |
+> | Microsoft.IoTSecurity/defenderSettings/read | Gets IoT Defender Settings |
+> | Microsoft.IoTSecurity/defenderSettings/write | Creates or updates IoT Defender Settings |
+> | Microsoft.IoTSecurity/defenderSettings/delete | Deletes IoT Defender Settings |
+> | Microsoft.IoTSecurity/defenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information |
+> | Microsoft.IoTSecurity/defenderSettings/downloadManagerActivation/action | Download manager activation file |
+> | Microsoft.IoTSecurity/deviceGroups/read | Gets device group |
+> | Microsoft.IoTSecurity/devices/read | Get devices |
+> | Microsoft.IoTSecurity/onPremiseSensors/read | Gets on-premise IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/write | Creates or updates on-premise IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/delete | Deletes on-premise IoT Sensors |
+> | Microsoft.IoTSecurity/onPremiseSensors/downloadActivation/action | Gets on-premise IoT Sensor Activation File |
+> | Microsoft.IoTSecurity/onPremiseSensors/downloadResetPassword/action | Downloads file for reset password of the on-premise IoT Sensor |
+> | Microsoft.IoTSecurity/recommendations/read | Gets IoT Recommendations |
+> | Microsoft.IoTSecurity/sensors/read | Gets IoT Sensors |
+> | Microsoft.IoTSecurity/sensors/write | Creates or updates IoT Sensors |
+> | Microsoft.IoTSecurity/sensors/delete | Deletes IoT Sensors |
+> | Microsoft.IoTSecurity/sensors/downloadActivation/action | Downloads activation file for IoT Sensors |
+> | Microsoft.IoTSecurity/sensors/triggerTiPackageUpdate/action | Triggers threat intelligence package update |
+> | Microsoft.IoTSecurity/sensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors |
+> | Microsoft.IoTSecurity/sites/read | Gets IoT site |
+> | Microsoft.IoTSecurity/sites/write | Creates IoT site |
+> | Microsoft.IoTSecurity/sites/delete | Deletes IoT site |
+ ### Microsoft.NotificationHubs Azure service: [Notification Hubs](../notification-hubs/index.yml)
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/diagnostics/delete | Deletes the specified Diagnostic. | > | Microsoft.ApiManagement/service/eventGridFilters/write | Set Event Grid Filters | > | Microsoft.ApiManagement/service/eventGridFilters/delete | Delete Event Grid Filters |
+> | Microsoft.ApiManagement/service/eventGridFilters/read | Get Event Grid Filter |
> | Microsoft.ApiManagement/service/gateways/read | Lists a collection of gateways registered with service instance. or Gets the details of the Gateway specified by its identifier. | > | Microsoft.ApiManagement/service/gateways/write | Creates or updates an Gateway to be used in Api Management instance. or Updates the details of the gateway specified by its identifier. | > | Microsoft.ApiManagement/service/gateways/delete | Deletes specific Gateway. |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/portalSettings/read | Lists a collection of portal settings. or Get Sign In Settings for the Portal or Get Sign Up Settings for the Portal or Get Delegation Settings for the Portal. | > | Microsoft.ApiManagement/service/portalSettings/write | Update Sign-In settings. or Create or Update Sign-In settings. or Update Sign Up settings or Update Sign Up settings or Update Delegation settings. or Create or Update Delegation settings. | > | Microsoft.ApiManagement/service/portalSettings/listSecrets/action | Gets validation key of portal delegation settings. or Get media content blob container uri. |
+> | Microsoft.ApiManagement/service/privateEndpointConnectionProxies/read | Get Private Endpoint Connection Proxy |
+> | Microsoft.ApiManagement/service/privateEndpointConnectionProxies/write | Create the private endpoint connection proxy |
+> | Microsoft.ApiManagement/service/privateEndpointConnectionProxies/delete | Delete the private endpoint connection proxy |
+> | Microsoft.ApiManagement/service/privateEndpointConnectionProxies/validate/action | Validate the private endpoint connection proxy |
+> | Microsoft.ApiManagement/service/privateEndpointConnectionProxies/operationresults/read | View the result of private endpoint connection operations in the management portal |
> | Microsoft.ApiManagement/service/privateEndpointConnections/read | Get Private Endpoint Connections | > | Microsoft.ApiManagement/service/privateEndpointConnections/write | Approve Or Reject Private Endpoint Connections | > | Microsoft.ApiManagement/service/privateEndpointConnections/delete | Delete Private Endpoint Connections |
security-center Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
Previously updated : 07/11/2021 Last updated : 07/12/2021
At the bottom of this page, there's a table describing the Azure Security Center
|**Digital currency mining related behavior detected**|Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining.|-|High| |**Dynamic PS script construction**|Analysis of host data on %{Compromised Host} detected a PowerShell script being constructed dynamically. Attackers sometimes use this approach of progressively building up a script in order to evade IDS systems. This could be legitimate activity, or an indication that one of your machines has been compromised.|-|Medium| |**Executable found running from a suspicious location**|Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host.|-|High|
-|**Fileless attack technique detected**|The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Executable image injected into the process, such as in a code injection attack.<br>3) Active network connections. See NetworkConnections below for details.<br>4) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>5) Process hollowing, which is a technique used by malware in which a legitimate process is loaded on the system to act as a container for hostile code.<br>6) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks.|Defense Evasion / Execution|High|
-|**Fileless attack behavior detected**|The memory of the process specified contains behaviors commonly used by fileless attacks. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Active network connections. See NetworkConnections below for details.<br>3) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>4) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks.|Defense Evasion|Low|
-|**Fileless attack toolkit detected**|The memory of the process specified contains a fileless attack toolkit: [toolkit name]. Fileless attack toolkits use techniques that minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions. Specific behaviors include:<br>1) Well-known toolkits and crypto mining software.<br>2) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>3) Injected malicious executable in process memory.|Defense Evasion|Medium|
+|**Fileless attack technique detected**<br>(VM_FilelessAttackTechnique.Windows)|The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Executable image injected into the process, such as in a code injection attack.<br>3) Active network connections. See NetworkConnections below for details.<br>4) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>5) Process hollowing, which is a technique used by malware in which a legitimate process is loaded on the system to act as a container for hostile code.<br>6) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks.|Defense Evasion, Execution|High|
+|**Fileless attack behavior detected**<br>(VM_FilelessAttackBehavior.Windows)|The memory of the process specified contains behaviors commonly used by fileless attacks. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Active network connections. See NetworkConnections below for details.<br>3) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>4) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks.|Defense Evasion|Low|
+|**Fileless attack toolkit detected**<br>(VM_FilelessAttackToolkit.Windows)|The memory of the process specified contains a fileless attack toolkit: [toolkit name]. Fileless attack toolkits use techniques that minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions. Specific behaviors include:<br>1) Well-known toolkits and crypto mining software.<br>2) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>3) Injected malicious executable in process memory.|Defense Evasion, Execution|Medium|
|**High risk software detected**|Analysis of host data from %{Compromised Host} detected the usage of software that has been associated with the installation of malware in the past. A common technique utilized in the distribution of malicious software is to package it within otherwise benign tools such as the one seen in this alert. Upon using these tools, the malware can be silently installed in the background.|-|Medium| |**Local Administrators group members were enumerated**|Machine logs indicate a successful enumeration on group %{Enumerated Group Domain Name}\%{Enumerated Group Name}. Specifically, %{Enumerating User Domain Name}\%{Enumerating User Name} remotely enumerated the members of the %{Enumerated Group Domain Name}\%{Enumerated Group Name} group. This activity could either be legitimate activity, or an indication that a machine in your organization has been compromised and used to reconnaissance %{vmname}.|-|Informational| |**Malicious SQL activity**|Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is considered malicious.|-|High|
At the bottom of this page, there's a table describing the Azure Security Center
|**Suspect service installation**|Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it is a known attacker technique to compromise additional user accounts and move laterally across a network.|-|Medium| |**Suspected Kerberos Golden Ticket attack parameters observed**|Analysis of host data detected commandline parameters consistent with a Kerberos Golden Ticket attack.|-|Medium| |**Suspicious Account Creation Detected**|Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator.|-|Medium|
-|**Suspicious Activity Detected**|Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host.|-|Medium|
+|**Suspicious Activity Detected**<br>(VM_SuspiciousActivity)|Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host.|Execution|Medium|
|**Suspicious PowerShell Activity Detected**|Analysis of host data detected a PowerShell script running on %{Compromised Host} that has features in common with known suspicious scripts. This script could either be legitimate activity, or an indication of a compromised host.|-|High| |**Suspicious PowerShell cmdlets executed**|Analysis of host data indicates execution of known malicious PowerShell PowerSploit cmdlets.|-|Medium| |**Suspicious SQL activity**|Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is uncommon with this account.|-|Medium|
At the bottom of this page, there's a table describing the Azure Security Center
|**Behavior similar to ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of files that have resemblance of known ransomware that can prevent users from accessing their system or personal files, and demands ransom payment in order to regain access. This behavior was seen [x] times today on the following machines: [Machine names]|-|High| |**Container with a miner image detected**|Machine logs indicate execution of a Docker container that runs an image associated with a digital currency mining. This behavior can possibly indicate that your resources are abused by an attacker.|-|High| |**Detected anomalous mix of upper and lower case characters in command line**|Analysis of host data on %{Compromised Host} detected a command line with anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host.|-|Medium|
-|**Detected file download from a known malicious source [seen multiple times]**|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}. This behavior was seen over [x] times today on the following machines: [Machine names]|-|Medium|
+|**Detected file download from a known malicious source [seen multiple times]**<br>(VM_SuspectDownload)|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}. This behavior was seen over [x] times today on the following machines: [Machine names]|Privilege Escalation, Execution, Exfiltration, Command and Control|Medium|
|**Detected file download from a known malicious source**|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}.|-|Medium| |**Detected persistence attempt [seen multiple times]**|Analysis of host data on %{Compromised Host} has detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode, so this may indicate that an attacker has added a malicious process to every run-level to guarantee persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Detected persistence attempt**<br>(VM_NewSingleUserModeStartupScript)|Host data analysis has detected that a startup script for single-user mode has been installed.<br>Because it's rare that any legitimate process would be required to run in that mode, this might indicate that an attacker has added a malicious process to every run-level to guarantee persistence. |Persistence|Medium|
At the bottom of this page, there's a table describing the Azure Security Center
|**Detected suspicious file download**<br>(VM_SuspectDownloadArtifacts)|Analysis of host data has detected suspicious download of remote file on %{Compromised Host}.|Persistence|Low| |**Detected suspicious network activity**|Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading of tools, command-and-control and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it.|-|Low| |**Detected suspicious use of the useradd command [seen multiple times]**|Analysis of host data has detected suspicious use of the useradd command on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Detected suspicious use of the useradd command**|Analysis of host data has detected suspicious use of the useradd command on %{Compromised Host}.|-|Medium|
+|**Detected suspicious use of the useradd command**<br>(VM_SuspectUserAddition)|Analysis of host data has detected suspicious use of the useradd command on %{Compromised Host}.|Persistence|Medium|
|**Digital currency mining related behavior detected**|Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining.|-|High| |**Disabling of auditd logging [seen multiple times]**|The Linux Audit system provides a way to track security-relevant information on the system. It records as much information about the events that are happening on your system as possible. Disabling auditd logging could hamper discovering violations of security policies used on the system. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low| |**Executable found running from a suspicious location**<br>(VM_SuspectExecutablePath)|Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host.| Execution |High| |**Exploitation of Xorg vulnerability [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the user of Xorg with suspicious arguments. Attackers may use this technique in privilege escalation attempts. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Exposed Docker daemon detected**|Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port.|-|Medium|
+|**Exposed Docker daemon on TCP socket**<br>(VM_ExposedDocker)|Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon by anyone with access to the relevant port.|Execution, Exploitation|Medium|
|**Failed SSH brute force attack**<br>(VM_SshBruteForceFailed)|Failed brute force attacks were detected from the following attackers: %{Attackers}. Attackers were trying to access the host with the following user names: %{Accounts used on failed sign in to host attempts}.|Probing|Medium| |**Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection)| The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} | Execution | Medium | |**Fileless Attack Technique Detected**<br>(VM_FilelessAttackTechnique.Linux)| The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors} | Execution | High |
At the bottom of this page, there's a table describing the Azure Security Center
|**New SSH key added [seen multiple times]**<br>(VM_SshKeyAddition)|A new SSH key was added to the authorized keys file. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence|Low| |**New SSH key added**|A new SSH key was added to the authorized keys file|-|Low| |**Possible attack tool detected [seen multiple times]**|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This tool is often associated with malicious users attacking other machines in some way. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Possible attack tool detected**|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This tool is often associated with malicious users attacking other machines in some way.|-|Medium|
+|**Possible attack tool detected**<br>(VM_KnownLinuxAttackTool)|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This tool is often associated with malicious users attacking other machines in some way.| Execution, Collection, Command and Control, Probing |Medium|
|**Possible backdoor detected [seen multiple times]**|Analysis of host data has detected a suspicious file being downloaded then run on %{Compromised Host} in your subscription. This activity has previously been associated with installation of a backdoor. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Possible credential access tool detected [seen multiple times]**|Machine logs indicate a possible known credential access tool was running on %{Compromised Host} launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Possible credential access tool detected**|Machine logs indicate a possible known credential access tool was running on %{Compromised Host} launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials.|-|Medium|
At the bottom of this page, there's a table describing the Azure Security Center
|**Possible Log Tampering Activity Detected**|Analysis of host data on %{Compromised Host} detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files.|-|Medium| |**Possible loss of data detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised. This behavior was seen [x]] times today on the following machines: [Machine names]|-|Medium| |**Possible loss of data detected**<br>(VM_DataEgressArtifacts)|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised.|Collection, Exfiltration|Medium|
-|**Possible malicious web shell detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
+|**Possible malicious web shell detected [seen multiple times]**<br>(VM_Webshell)|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence, Exploitation|Medium|
|**Possible malicious web shell detected**|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation.|-|Medium| |**Possible password change using crypt-method detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected password change using crypt method. Attackers can make this change to continue access and gaining persistence after compromise. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Potential overriding of common files [seen multiple times]**|Analysis of host data has detected common executables being overwritten on %{Compromised Host}. Attackers will overwrite common files as a way to obfuscate their actions or for persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
At the bottom of this page, there's a table describing the Azure Security Center
|**Successful SSH brute force attack**<br>(VM_SshBruteForceSuccess)|Analysis of host data has detected a successful brute force attack. The IP %{Attacker source IP} was seen making multiple login attempts. Successful logins were made from that IP with the following user(s): %{Accounts used to successfully sign in to host}. This means that the host may be compromised and controlled by a malicious actor.|Exploitation|High| |**Suspicious Account Creation Detected**|Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator.|-|Medium| |**Suspicious compilation detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Suspicious compilation detected**|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges.|-|Medium|
+|**Suspicious compilation detected**<br>(VM_SuspectCompilation)|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges.|Privilege Escalation, Exploitation|Medium|
|**Suspicious kernel module detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a shared object file being loaded as a kernel module. This could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Suspicious password access [seen multiple times]**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Informational| |**Suspicious password access**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}.|-|Informational| |**Suspicious PHP execution detected**|Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run OS commands or PHP code from the command line using the PHP process. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.|-|Medium|
-|**Suspicious request to Kubernetes API**|Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container.|-|Medium|
+|**Suspicious request to Kubernetes API**<br>(VM_KubernetesAPI)|Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container.|Execution|Medium|
|||||
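
If these alerts are streamed to a Log Analytics workspace (for example, with Security Center's continuous export), the alert type IDs added in parentheses above can be used to filter them. A minimal Kusto sketch, assuming the exported alerts land in the standard `SecurityAlert` table:

```kusto
// Minimal sketch: assumes Security Center alerts are continuously exported
// to a Log Analytics workspace, where they appear in the SecurityAlert table.
SecurityAlert
| where TimeGenerated > ago(7d)
// AlertType carries the ID shown in parentheses in the table above
| where AlertType in ("VM_SuspectDownload", "VM_SuspiciousActivity", "VM_ExposedDocker")
| project TimeGenerated, AlertName, AlertType, AlertSeverity, CompromisedEntity
| order by TimeGenerated desc
```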
Azure Defender alerts for container hosts aren't limited to the alerts below. Ma
| **Login from a principal user not seen in 60 days**<br>(SQL.DB_PrincipalAnomaly<br>SQL.VM_PrincipalAnomaly<br>SQL.DW_PrincipalAnomaly<br>SQL.MI_PrincipalAnomaly) | A principal user not seen in the last 60 days has logged into your database. If this database is new or this is expected behavior caused by recent changes in the users accessing the database, Security Center will identify significant changes to the access patterns and attempt to prevent future false positives. ) | Exploitation | Medium | | **Login from a suspicious IP** | Your resource has been accessed successfully from an IP address that Microsoft Threat Intelligence has associated with suspicious activity. ) | PreAttack | Medium | | **Potential SQL Brute Force attempt** | An abnormally high number of failed sign in attempts with different credentials have occurred. In some cases, the alert detects penetration testing in action. In other cases, the alert detects a brute force attack. ) | Probing | High |
-| **Potential SQL injection**<br>(SQL.DB_PotentialSqlInjection<br>SQL.VM_PotentialSqlInjection<br>SQL.MI_PotentialSqlInjection<br>SQL.DW_PotentialSqlInjection<br>Synapse.SQLPool_PotentialSqlInjection) | An active exploit has occurred against an identified application vulnerable to SQL injection. This means an attacker is trying to inject malicious SQL statements by using the vulnerable application code or stored procedures. ) | - | High |
+| **Potential SQL injection**<br>(SQL.DB_PotentialSqlInjection<br>SQL.VM_PotentialSqlInjection<br>SQL.MI_PotentialSqlInjection<br>SQL.DW_PotentialSqlInjection<br>Synapse.SQLPool_PotentialSqlInjection) | An active exploit has occurred against an identified application vulnerable to SQL injection. This means an attacker is trying to inject malicious SQL statements by using the vulnerable application code or stored procedures. ) | PreAttack | High |
| **Potentially Unsafe Action**<br>(SQL.DB_UnsafeCommands<br>SQL.MI_UnsafeCommands<br>SQL.DW_UnsafeCommands) | A potentially unsafe action was attempted on your database '{name}' on server '{name}'. ) | - | High | | **Suspected brute force attack using a valid user** | A potential brute force attack has been detected on your resource. The attacker is using the valid user sa, which has permissions to login. ) | PreAttack | High | | **Suspected brute force attack** | A potential brute force attack has been detected on your SQL server '{name}'. ) | PreAttack | High |
security-center Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-container-registries-introduction.md
Azure Container Registry (ACR) is a managed, private Docker registry service that stores and manages your container images for Azure deployments in a central registry. It's based on the open-source Docker Registry 2.0.
-To protect the Azure Resource Manager based registries in your subscription, enable **Azure Defender for container registries** at the subscription level. Azure Defender will then scan all images when they're pushed to the registry, imported into the registry, or pulled within the last 30 days. You'll be charged for every image that gets scanned – once per image.
+To protect the Azure Resource Manager based registries in your subscription, enable **Azure Defender for container registries** at the subscription level. Azure Defender will then scan all images when they're pushed to the registry, imported into the registry, or pulled within the last 30 days. You'll be charged for every image that gets scanned – once per image.
[!INCLUDE [Defender for container registries availability info](../../includes/security-center-availability-defender-for-container-registries.md)]
security-center Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/quickstart-onboard-machines.md
Title: Connect your non-Azure machines to Azure Security Center
description: Learn how to connect your non-Azure machines to Security Center Previously updated : 11/16/2020 Last updated : 07/12/2021
Each of these is described on this page.
## Add non-Azure machines with Azure Arc
-Azure Arc enabled servers is the preferred way of adding your non-Azure machines to Azure Security Center.
+The preferred way of adding your non-Azure machines to Azure Security Center is with [Azure Arc enabled servers](../azure-arc/servers/overview.md).
-A machine with Azure Arc enabled servers becomes an Azure resource and appears in Security Center with recommendations like your other Azure resources.
+A machine with Azure Arc enabled servers becomes an Azure resource and - when you've installed the Log Analytics agent on it - appears in Security Center with recommendations like your other Azure resources.
-In addition, Azure Arc enabled servers provides enhanced capabilities such as the option to enable guest configuration policies on the machine, deploy the Log Analytics agent as an extension, simplify deployment with other Azure services, and more. For an overview of the benefits, see [Supported scenarios](../azure-arc/servers/overview.md#supported-scenarios).
+In addition, Azure Arc enabled servers provides enhanced capabilities such as the option to enable guest configuration policies on the machine, simplify deployment with other Azure services, and more. For an overview of the benefits, see [Supported scenarios](../azure-arc/servers/overview.md#supported-scenarios).
+
+> [!NOTE]
+> Security Center's auto-deploy tools for deploying the Log Analytics agent don't support machines running Azure Arc. When you've connected your machines using Azure Arc, use the relevant Security Center recommendation to deploy the agent and benefit from the full range of protections offered by Security Center:
+>
+> - [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1)
+> - [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08)
Learn more about [Azure Arc enabled servers](../azure-arc/servers/overview.md).
Learn more about [Azure Arc enabled servers](../azure-arc/servers/overview.md).
- To connect multiple machines at scale to Arc enabled servers, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md) > [!TIP]
-> If you're onboarding machines running on AWS, Security Center's connector for AWS transparently handles the Azure Arc deployment for you. Learn more in [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md).
+> If you're onboarding machines running on Amazon Web Services (AWS), Security Center's connector for AWS transparently handles the Azure Arc deployment for you. Learn more in [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md).
::: zone-end
security-center Security Center Wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
Previously updated : 07/01/2021 Last updated : 07/11/2021
Confirm that your machine meets the necessary requirements for Defender for Endp
1. Ensure the machine is connected to Azure as required: - For **Windows** servers, configure the network settings described in [Configure device proxy and Internet connectivity settings](/windows/security/threat-protection/microsoft-defender-atp/configure-proxy-internet)
- - For **on-premises** machines, connect it to Azure Arc as explained in [Connect hybrid machines with Azure Arc enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md)
- For **Windows Server 2019** and [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md) machines, confirm that your machines have the MicrosoftMonitoringAgent extension.
+ - For **on-premises** machines, connect them to Azure Arc as explained in [Connect hybrid machines with Azure Arc enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md) and then use the relevant Security Center recommendation to deploy the Log Analytics agent:
+ - [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1)
+ - [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08)
1. Enable **Azure Defender for servers**. See [Quickstart: Enable Azure Defender](enable-azure-defender.md).
-1. If you've already licensed and deployed Microsoft Defender for Endpoints on your servers, remove it using the procedure described in [Offboard Windows servers](/windows/security/threat-protection/microsoft-defender-atp/configure-server-endpoints#offboard-windows-servers).
1. If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For full details, [contact Microsoft support](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
Security Center automatically deploys the MDE.Windows extension to machines runn
- Windows 10 Virtual Desktop (WVD) - Other versions of Windows Server if Security Center doesn't recognize the OS version (for example, when a custom VM image is used). In this case, Microsoft Defender for Endpoint is still provisioned by the Log Analytics agent.
-> [!TIP]
+> [!IMPORTANT]
> If you delete the MDE.Windows extension, it will not remove Microsoft Defender for Endpoint. To 'offboard', see [Offboard Windows servers](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide). ### What are the licensing requirements for Microsoft Defender for Endpoint?
sentinel Watchlists https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/watchlists.md
Title: Use Azure Sentinel watchlists
-description: This article describes how to use Azure Sentinel watchlists investigate threats, import business data, create allow lists, and enrich event data.
+description: This article describes how to use Azure Sentinel watchlists to create allowlists/blocklists, enrich event data, and assist in investigating threats.
ms.assetid: 1721d0da-c91e-4c96-82de-5c7458df566b -+ Previously updated : 09/06/2020 Last updated : 07/11/2021 # Use Azure Sentinel watchlists
-> [!IMPORTANT]
-> The watchlists feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Azure Sentinel watchlists enable the collection of data from external data sources for correlation with the events in your Azure Sentinel environment. Once created, you can use watchlists in your search, detection rules, threat hunting, and response playbooks. Watchlists are stored in your Azure Sentinel workspace as name-value pairs and are cached for optimal query performance and low latency. Common scenarios for using watchlists include: - **Investigating threats** and responding to incidents quickly with the rapid import of IP addresses, file hashes, and other data from CSV files. Once imported, you can use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries. -- **Importing business data** as a watchlist. For example, import user lists with privileged system access, or terminated employees, and then use the watchlist to create allow and deny lists used to detect or prevent those users from logging in to the network.
+- **Importing business data** as a watchlist. For example, import user lists with privileged system access, or terminated employees, and then use the watchlist to create allowlists and blocklists used to detect or prevent those users from logging in to the network.
-- **Reducing alert fatigue**. Create allow lists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the alert, and prevent benign events from becoming alerts.
+- **Reducing alert fatigue**. Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the alert, and prevent benign events from becoming alerts.
- **Enriching event data**. Use watchlists to enrich your event data with name-value combinations derived from external data sources.
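
For the alert-fatigue scenario above, here is a minimal Kusto sketch of an allowlist check. The watchlist alias `AllowedIPs`, its use of the `SearchKey` column for approved IP addresses, and the `SigninLogs` table (available when the Azure AD sign-in logs connector is enabled) are assumptions for illustration only:

```kusto
// Minimal sketch: 'AllowedIPs' is a hypothetical watchlist whose SearchKey
// column holds approved source IP addresses.
let allowedIPs = _GetWatchlist('AllowedIPs') | project SearchKey;
SigninLogs
| where TimeGenerated > ago(1d)
// Drop sign-ins from allowlisted IPs so benign activity doesn't raise alerts
| where IPAddress !in (allowedIPs)
```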
+> [!NOTE]
+> - The use of watchlists should be limited to reference data, as they are not designed for large data volumes.
+>
+> - The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items do not count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead.
+>
+> - Watchlists can only be referenced from within the same workspace. Cross-workspace and/or Lighthouse scenarios are currently not supported.
+ ## Create a new watchlist 1. From the Azure portal, navigate to **Azure Sentinel** > **Configuration** > **Watchlist** and then select **+ Add new**.
Common scenarios for using watchlists include:
You will see a preview of the first 50 rows of results in the wizard screen.
-1. In the **SearchKey** field, enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains server names and their respective IP addresses, and you expect to use the IP addresses often for search or joins, use the **IP Address** column as the SearchKey.
+1. In the **SearchKey** field, enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey.
1. Select **Next: Review and Create**.
Common scenarios for using watchlists include:
```kusto Heartbeat
- | lookup kind=leftouter _GetWatchlist('IPlist')
- on $left.ComputerIP == $right.SearchKey
+ | lookup kind=leftouter _GetWatchlist('mywatchlist')
+ on $left.RemoteIPCountry == $right.SearchKey
``` :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-join.png" alt-text="queries against watchlist as lookup" lightbox="./media/watchlists/sentinel-watchlist-queries-join.png":::
To get a list of watchlist aliases, from the Azure portal, navigate to **Azure S
:::image type="content" source="./media/watchlists/sentinel-watchlist-alias.png" alt-text="list watchlists" lightbox="./media/watchlists/sentinel-watchlist-alias.png":::
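
The same list can also be retrieved from a query. A minimal sketch, assuming the built-in `_GetWatchlistAlias` helper function is available in your workspace; each returned alias can then be passed to `_GetWatchlist('<alias>')` to retrieve that watchlist's items:

```kusto
// Minimal sketch: lists the aliases of all watchlists in the workspace
// (assumes the _GetWatchlistAlias helper function is available).
_GetWatchlistAlias
```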
+## Manage your watchlist in the Azure Sentinel portal
+
+You can also view, edit, and create new watchlist items directly from the Watchlist blade in the Azure Sentinel portal.
+
+1. To edit your watchlist, navigate to **Azure Sentinel > Configuration > Watchlist**, select the watchlist you want to edit, and select **Edit watchlist items** on the details pane.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit.png" alt-text="Screen shot showing how to edit a watchlist" lightbox="./media/watchlists/sentinel-watchlist-edit.png":::
+
+1. To edit an existing watchlist item, mark the checkbox of that watchlist item, edit the item, and select **Save**. Select **Yes** at the confirmation prompt.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-change.png" alt-text="Screen shot showing how to mark and edit a watchlist item.":::
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-confirm.png" alt-text="Screen shot confirming your changes.":::
+
+1. To add a new item to your watchlist, select **Add new** on the **Edit watchlist items** screen, fill in the fields in the **Add watchlist item** panel, and select **Add** at the bottom of that panel.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-add.png" alt-text="Screen shot showing how to add a new item to your watchlist.":::
+ ## Next steps In this document, you learned how to use watchlists in Azure Sentinel to enrich data and improve investigations. To learn more about Azure Sentinel, see the following articles:-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started [detecting threats with Azure Sentinel](./tutorial-detect-threats-built-in.md). - [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## July 2021
+- [Watchlists are in general availability](#watchlists-are-in-general-availability)
- [Support for data residency in more geos](#support-for-data-residency-in-more-geos) - [Bidirectional sync in Azure Defender connector](#bidirectional-sync-in-azure-defender-connector)
+### Watchlists are in general availability
+
+The [watchlists](watchlists.md) feature is now generally available. Use watchlists to enrich alerts with business data, to create allowlists or blocklists against which to check access events, and to help investigate threats and reduce alert fatigue.
+ ### Support for data residency in more geos Azure Sentinel now supports full data residency in the following additional geos:
service-health Resource Health Alert Arm Template Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-health/resource-health-alert-arm-template-guide.md
It's easy to configure your alert to filter for only these kinds of events:
] } ```
-Note that it is possible for the cause field to be null in some events. That is, a health transition takes place (e.g. available to unavailable) and the event is logged immediately to prevent notification delays. Therefore, using the clause above may result in an alert not being triggered, because the properties.clause property value will be set to null.
+Note that it is possible for the cause field to be null in some events. That is, a health transition takes place (e.g. available to unavailable) and the event is logged immediately to prevent notification delays. Therefore, using the clause above may result in an alert not being triggered, because the properties.cause property value will be set to null.
## Complete Resource Health alert template
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
This table summarizes support for the cache storage account used by Site Recover
| | General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is not recommended because transaction costs for V2 are substantially higher than V1 storage accounts. Premium storage | Not supported | Standard storage accounts are used for cache storage, to help optimize costs.
+Subscription | Same as source virtual machines | Cache storage account must be in the same subscription as the source virtual machine(s).
Azure Storage firewalls for virtual networks | Supported | If you are using firewall enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of source Vnet.<br></br>Note: Do not restrict virtual network access to your storage accounts used for ASR. You should allow access from 'All networks'. The table below lists the limits in terms of number of disks that can replicate to a single storage account.
spring-cloud Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/vnet-customer-responsibilities.md
The following is a list of resource requirements for Azure Spring Cloud services
## Azure Spring Cloud network requirements
- | Destination Endpoint | Port | Use | Note |
- |||||
- | *:1194 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | |
- | *:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Cloud Service Management. | Information of service instance "requiredTraffics" could be known in resource payload, under "networkProfile" section. |
- | *:9000 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:9000 | TCP:9000 | Underlying Kubernetes Cluster management. |
- | *:123 *Or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
- | *.azure.io:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
- | *.core.windows.net:443 and *.core.windows.net:445 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure File Storage | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
- | *.servicebus.windows.net:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hub. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
-
+| Destination Endpoint | Port | Use | Note |
+| | - | -- | |
+| *:1194 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | |
+| *:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Cloud Service Management. | Information about the service instance's "requiredTraffics" can be found in the resource payload, under the "networkProfile" section. |
+| *:9000 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:9000 | TCP:9000 | Underlying Kubernetes Cluster management. | |
+| *:123 *Or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
+| *.azure.io:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| *.core.windows.net:443 and *.core.windows.net:445 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure File Storage | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| *.servicebus.windows.net:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hub. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+ ## Azure Spring Cloud FQDN requirements/application rules Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
- | Destination FQDN | Port | Use |
- ||||
- | *.azmk8s.io | HTTPS:443 | Underlying Kubernetes Cluster management. |
- | <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). |
- | *.cdn.mscr.io | HTTPS:443 | MCR storage backed by the Azure CDN. |
- | *.data.mcr.microsoft.com | HTTPS:443 | MCR storage backed by the Azure CDN. |
- | <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
- | <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
- | <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
- |<i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
- | <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |
- | *mscrl.microsoft.com* | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
- | *crl.microsoft.com* | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
- | *crl3.digicert.com* | HTTPS:80 | 3rd Party SSL Certificate Chain Paths. |
-
+| Destination FQDN | Port | Use |
+| | | |
+| *.azmk8s.io | HTTPS:443 | Underlying Kubernetes Cluster management. |
+| <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). |
+| *.cdn.mscr.io | HTTPS:443 | MCR storage backed by the Azure CDN. |
+| *.data.mcr.microsoft.com | HTTPS:443 | MCR storage backed by the Azure CDN. |
+| <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
+| <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+| <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+| <i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
+| <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |
+| *mscrl.microsoft.com* | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
+| *crl.microsoft.com* | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
+| *crl3.digicert.com* | HTTPS:80 | 3rd Party SSL Certificate Chain Paths. |
+ ## Azure Spring Cloud optional FQDN for third-party application performance management Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
- | Destination FQDN | Port | Use |
- | - | - | |
- | collector*.newrelic.com | TCP:443/80 | Required networks of New Relic APM agents from US region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
- | collector*.eu01.nr-data.net | TCP:443/80 | Required networks of New Relic APM agents from EU region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
+| Destination FQDN | Port | Use |
+| | - | |
+| collector*.newrelic.com | TCP:443/80 | Required networks of New Relic APM agents from US region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
+| collector*.eu01.nr-data.net | TCP:443/80 | Required networks of New Relic APM agents from EU region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
+| *.live.dynatrace.com | TCP:443 | Required network of Dynatrace APM agents. |
+| *.live.ruxit.com | TCP:443 | Required network of Dynatrace APM agents. |
## See also * [Access your application in a private network](access-app-virtual-network.md)
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-automl.md
Previously updated : 11/20/2020 Last updated : 7/9/2021
If you don't have an Azure subscription, [create a free account before you begin
## Prerequisites - An [Azure Synapse Analytics workspace](../get-started-create-workspace.md). Ensure that it has an Azure Data Lake Storage Gen2 storage account configured as the default storage. For the Data Lake Storage Gen2 file system that you work with, ensure that you're the *Storage Blob Data Contributor*.-- An Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a dedicated SQL pool by using Synapse Studio](../quickstart-create-sql-pool-studio.md).
+- An Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
- An Azure Machine Learning linked service in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse Analytics](quickstart-integrate-azure-machine-learning.md). ## Sign in to the Azure portal
For this tutorial, you need a Spark table. The following notebook creates one:
To open the wizard:
-1. Right-click the Spark table that you created in the previous step. Then select **Machine Learning** > **Train a new model**.
-![Screenshot of the Spark table, with Machine Learning and Train a new model highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00d.png)
+1. Right-click the Spark table that you created in the previous step. Then select **Machine Learning** > **Enrich with new model**.
+![Screenshot of the Spark table, with Machine Learning and Enrich with new model highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00d.png)
1. Provide configuration details for creating an automated machine learning experiment run in Azure Machine Learning. This run trains multiple models. The best model from a successful run is registered in the Azure Machine Learning model registry.
After you've successfully submitted the run, you see a link to the experiment ru
- [Tutorial: Machine learning model scoring wizard (preview) for dedicated SQL pools](tutorial-sql-pool-model-scoring-wizard.md) - [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse Analytics](quickstart-integrate-azure-machine-learning.md)-- [Machine learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
+- [Machine learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
You need a Spark table for this tutorial.
## Open the Cognitive Services wizard
-1. Right-click the Spark table created in the previous step. Select **Machine Learning** > **Predict with a model** to open the wizard.
+1. Right-click the Spark table created in the previous step. Select **Machine Learning** > **Enrich with existing model** to open the wizard.
- ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00g2.png)
+ ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00g.png)
2. A configuration panel appears, and you're asked to select a Cognitive Services model. Select **Anomaly Detector**.
- ![Screenshot that shows selection of Anomaly Detector as a model.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00c2.png)
+ ![Screenshot that shows selection of Anomaly Detector as a model.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00c.png)
## Provide authentication details
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
You'll need a Spark table for this tutorial.
## Open the Cognitive Services wizard
-1. Right-click the Spark table created in the previous procedure. Select **Machine Learning** > **Predict with a model** to open the wizard.
+1. Right-click the Spark table created in the previous procedure. Select **Machine Learning** > **Enrich with existing model** to open the wizard.
- ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00d2.png)
+ ![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00d.png)
2. A configuration panel appears, and you're asked to select a Cognitive Services model. Select **Text analytics - Sentiment Analysis**.
- ![Screenshot that shows selection of a Cognitive Services model.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00e2.png)
+ ![Screenshot that shows selection of a Cognitive Services model.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00e.png)
## Provide authentication details
synapse-analytics Tutorial Sql Pool Model Scoring Wizard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard.md
Before you run all cells in the notebook, check that the compute instance is run
![Load data to dedicated SQL pool](media/tutorial-sql-pool-model-scoring-wizard/tutorial-sql-scoring-wizard-00b.png)
-1. Go to **Data** > **Workspace**. Open the SQL scoring wizard by right-clicking the dedicated SQL pool table. Select **Machine Learning** > **Predict with a model**.
+1. Go to **Data** > **Workspace**. Open the SQL scoring wizard by right-clicking the dedicated SQL pool table. Select **Machine Learning** > **Enrich with existing model**.
> [!NOTE] > The machine learning option does not appear unless you have a linked service created for Azure Machine Learning. (See [Prerequisites](#prerequisites) at the beginning of this tutorial.)
synapse-analytics Sql Data Warehouse Partner Browse Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-browse-partners.md
+
+ Title: Discover third-party solutions from Azure Synapse partners through Synapse Studio
+description: Learn how to discover new third-party solutions from Azure Synapse partners that are tightly integrated with Azure Synapse Analytics.
+++++ Last updated : 07/14/2021++++++
+# Discover partner solutions through Synapse Studio
+
+Synapse Studio allows the discovery of solution partners that extend the capabilities of Azure Synapse. From data connectors, through data wrangling tools, orchestration engines, and other workloads, the **browse partners** page serves as a hub for discovering third-party ISV applications and solutions verified to work with Azure Synapse Analytics. Synapse Studio simplifies getting started with these partners, in many cases with automated setup of the initial connection to the partner platform.
+
+## Participating partners
+The following table lists partner solutions that are currently supported. Make sure you check back often as we add new partners to this list.
+
+| Partner | Solution name |
+| - | - |
+| ![Incorta](./media/sql-data-warehouse-partner-data-integration/incorta-logo.png) | Incorta Intelligent Ingest for Azure Synapse |
+| ![Informatica](./media/sql-data-warehouse-partner-data-integration/informatica_logo.png) | Informatica Intelligent Data Management Cloud |
+| ![Qlik Data Integration (formerly Attunity)](./media/sql-data-warehouse-partner-business-intelligence/qlik_logo.png) | Qlik Data Integration (formerly Attunity) |
+
+## Requirements
+When you choose a partner application, Azure Synapse Studio provisions a sandbox environment you can use for this trial, ensuring you can experiment with the partner solution quickly before you decide to use it with your production data. The following objects are created:
+
+| Object | Details |
+| -- | - |
+| A [dedicated SQL pool](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is) named **Partner_[PartnerName]_pool** | DW100c performance level. |
+| A [SQL login](/sql/relational-databases/security/authentication-access/principals-database-engine#sa-login) named **Partner_[PartnerName]_login** | Created on your `master` database. The password for this SQL login is specified by you at the creation of your trial.|
+| A [database user](/azure/azure-sql/database/logins-create-manage?bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) | A new database user, mapped to the new SQL login. This user is added to the db_owner role for the newly created database. |
+
+In all cases, **[PartnerName]** is the name of the third-party ISV who offers the trial.
+
+### Security
+After the required objects are created, Synapse Studio sends information about your new sandbox environment to the partner application, allowing a customized trial experience. The following information is sent to our partners:
+- First name
+- Last name
+- E-mail address
+- Details about the Synapse environment required to establish a connection:
+ - DNS name of your Synapse Workspace (server name)
+ - Name of the SQL pool (database)
+ - SQL login (username only)
+
+We never share any passwords with the partner application, including the password of the newly created SQL login. You'll be required to type your password in the partner application.
+
+### Costs
+The dedicated SQL pool that is created for your partner trial incurs ongoing costs, which are based on the number of DWU blocks and hours running. Make sure you pause the SQL pool created for this partner trial when it isn't in use, to avoid unnecessary charges.
+
+## Starting a new partner trial
+
+1) On the Synapse Studio home page, under **Discover more**, select **browse partners**.
+2) The Browse partners page shows all partners currently offering trials that allow direct connectivity with Azure Synapse. Choose a partner solution.
+3) The partner details page shows you relevant information about this application and links to learn more about their solution. When you're ready to start a trial, select **Connect to partner**.
+4) On the **Connect to [PartnerName] Solution** page, note the requirements of this partner connection. Change the SQL pool name and SQL login parameters if desired (or accept the defaults), type the password of your new SQL login, and select **Connect**.
+
+The required objects will be created for your partner trial. You'll then be forwarded to a partner page to provide additional information (if needed) and to start your trial.
+
+> [!NOTE]
+> Microsoft doesn't control the partner trial experience. Partners offer product trials on their own terms; the experience, trial availability, and features may vary depending on the partner. Microsoft does not offer support for third-party applications offered in Synapse Studio.
+
+## Next steps
+
+To learn more about some of our other partners, see [Data Integration partners](sql-data-warehouse-partner-data-integration.md), [Data Management partners](sql-data-warehouse-partner-data-management.md), and [Machine Learning and AI partners](sql-data-warehouse-partner-machine-learning-ai.md).
synapse-analytics Sql Data Warehouse Partner Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-business-intelligence.md
Title: Business Intelligence partners description: Lists of third-party business intelligence partners with solutions that support Azure Synapse Analytics. - Previously updated : 03/27/2019 Last updated : 07/09/2021+
To create your data warehouse solution, you can choose from different kinds of i
## Our business intelligence partners | Partner | Description | Website/Product link | | - | -- | -- |
-| ![AtScale](./media/sql-data-warehouse-partner-business-intelligence/atscale-logo.png) |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering™, and Universal Semantic Layer™ powers business intelligence results for faster, more accurate business decisions. |[Product page](https://www.atscale.com/partners/microsoft/)<br> |
+| ![AtScale](./media/sql-data-warehouse-partner-business-intelligence/atscale-logo.png) |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering&trade;, and Universal Semantic Layer&trade; powers business intelligence results for faster, more accurate business decisions. |[Product page](https://www.atscale.com/partners/microsoft/)<br> |
| ![Birst](./media/sql-data-warehouse-partner-business-intelligence/birst_logo.png) |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Product page](https://www.birst.com/)<br> |
-| ![Count](./media/sql-data-warehouse-partner-business-intelligence/count-logo.png) |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few clicks. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Product page](https://count.co/)<br>|
-| ![Dremio](./media/sql-data-warehouse-partner-business-intelligence/dremio-logo.png) |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Product page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
+| ![Count](./media/sql-data-warehouse-partner-business-intelligence/count-logo.png) |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few clicks. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Product page](https://count.co/)<br>|
+| ![Dremio](./media/sql-data-warehouse-partner-business-intelligence/dremio-logo.png) |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Product page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
| ![Dundas](./media/sql-data-warehouse-partner-business-intelligence/dundas_software_logo.png) |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Product page](https://www.dundas.com/dundas-bi)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dundas.dundas-bi)<br> | | ![IBM Cognos](./media/sql-data-warehouse-partner-business-intelligence/cognos_analytics_logo.png) |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[Product page](https://www.ibm.com/products/cognos-analytics)<br>| | ![Information Builders](./media/sql-data-warehouse-partner-business-intelligence/informationbuilders_logo.png) |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Product page](https://www.informationbuilders.com/products/bi-and-analytics-platform)<br> |
To create your data warehouse solution, you can choose from different kinds of i
## Next Steps To learn more about some of our other partners, see [Data Integration partners](sql-data-warehouse-partner-data-integration.md), [Data Management partners](sql-data-warehouse-partner-data-management.md), and [Machine Learning and AI partners](sql-data-warehouse-partner-machine-learning-ai.md).+
+See how to [discover partner solutions through Synapse Studio](sql-data-warehouse-partner-browse-partners.md).
+
synapse-analytics Sql Data Warehouse Partner Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| Partner | Description | Website/Product link | | - | -- | -- |
-| ![Ab Initio](./media/sql-data-warehouse-partner-data-integration/abinitio-logo.png) |**Ab Initio**<br> Ab InitioΓÇÖs agile digital engineering platform helps you solve the toughest data processing and data management problems in corporate computing. Ab InitioΓÇÖs cloud-native platform lets you access and use data anywhere in your corporate ecosystem, whether in Azure or on-premises, including data stored on legacy systems. The combination of an intuitive interface with powerful automation, data quality, data governance, and active metadata capabilities enables rapid development and true data self-service, freeing analysts to do their jobs quickly and effectively. Join the worldΓÇÖs largest businesses in using Ab Initio to turn big data into meaningful data. |[Product page](https://www.abinitio.com/) |
-| ![Aecorsoft](./media/sql-data-warehouse-partner-data-integration/aecorsoft-logo.png) |**Aecorsoft**<br> AecorSoft offers fast, scalable, and real-time ELT/ETL software solution to help SAP customers bring complex SAP data to Azure Synapse Analytics and Azure data platform. With full compliance with SAP application layer security, AecorSoft solution is officially SAP Premium Certified to integrate with SAP applications. AecorSoftΓÇÖs unique Super Delta and Change-Data-Capture features enable SAP users to stream delta data from SAP transparent, pool, and cluster tables to Azure in CSV, Parquet, Avro, ORC, or GZIP format. Besides SAP tabular data, many other business-rule-heavy SAP objects like BW queries and S/4HANA CDS Views are fully supported. |[Product page](https://www.aecorsoft.com/products/dataintegrator)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aecorsoftinc1588038796343.aecorsoftintegrationservice_adf)<br>|
+| ![Ab Initio](./media/sql-data-warehouse-partner-data-integration/abinitio-logo.png) |**Ab Initio**<br> Ab Initio's agile digital engineering platform helps you solve the toughest data processing and data management problems in corporate computing. Ab Initio's cloud-native platform lets you access and use data anywhere in your corporate ecosystem, whether in Azure or on-premises, including data stored on legacy systems. The combination of an intuitive interface with powerful automation, data quality, data governance, and active metadata capabilities enables rapid development and true data self-service, freeing analysts to do their jobs quickly and effectively. Join the world's largest businesses in using Ab Initio to turn big data into meaningful data. |[Product page](https://www.abinitio.com/) |
+| ![Aecorsoft](./media/sql-data-warehouse-partner-data-integration/aecorsoft-logo.png) |**Aecorsoft**<br> AecorSoft offers fast, scalable, and real-time ELT/ETL software solution to help SAP customers bring complex SAP data to Azure Synapse Analytics and Azure data platform. With full compliance with SAP application layer security, AecorSoft solution is officially SAP Premium Certified to integrate with SAP applications. AecorSoft's unique Super Delta and Change-Data-Capture features enable SAP users to stream delta data from SAP transparent, pool, and cluster tables to Azure in CSV, Parquet, Avro, ORC, or GZIP format. Besides SAP tabular data, many other business-rule-heavy SAP objects like BW queries and S/4HANA CDS Views are fully supported. |[Product page](https://www.aecorsoft.com/products/dataintegrator)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aecorsoftinc1588038796343.aecorsoftintegrationservice_adf)<br>|
| ![Alooma](./media/sql-data-warehouse-partner-data-integration/alooma_logo.png) |**Alooma**<br> Alooma is an Extract, Transform, and Load (ETL) solution that enables data teams to integrate, enrich, and stream data from various data silos to an Azure Synapse data warehouse all in real time. |[Product page](https://www.alooma.com/) | | ![Alteryx](./media/sql-data-warehouse-partner-data-integration/alteryx_logo.png) |**Alteryx**<br> Alteryx Designer provides a repeatable workflow for self-service data analytics that leads to deeper insights in hours, not the weeks typical of traditional approaches! Alteryx Designer helps data analysts by combining data preparation, data blending, and analytics (predictive, statistical, and spatial) using the same intuitive user interface. |[Product page](https://www.alteryx.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/alteryx.alteryx-designer)<br>| | ![BI Builders (Xpert BI)](./media/sql-data-warehouse-partner-data-integration/bibuilders-logo.png) |**BI Builders (Xpert BI)**<br> Xpert BI helps organizations build and maintain a robust and scalable data platform in Azure faster through metadata-based automation. It extends Azure Synapse with best practices and DataOps, for agile data development with built-in data governance functionalities. Use Xpert BI to quickly test out and switch between different Azure solutions such as Azure Synapse, Azure Data Lake Storage, and Azure SQL Database, as your business and analytics needs change and grow.|[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Dimodelo](./media/sql-data-warehouse-partner-data-integration/dimodelo-logo.png) |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-click deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Product page](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> | | ![Fivetran](./media/sql-data-warehouse-partner-data-integration/fivetran_logo.png) |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero maintenance, zero configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Product page](https://fivetran.com/)<br> | | ![HVR](./media/sql-data-warehouse-partner-data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hvr.hvr-for-azure?tab=Overview)<br>|
-| ![Incorta](./media/sql-data-warehouse-partner-data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. which. Using a proprietary technology called Direct Data Mapping and IncortaΓÇÖs Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta?tab=Overview)<br>|
+| ![Incorta](./media/sql-data-warehouse-partner-data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta?tab=Overview)<br>|
| ![Informatica](./media/sql-data-warehouse-partner-data-integration/informatica_logo.png) |**1.Informatica Cloud Services for Azure**<br> Informatica Cloud offers a best-in-class solution for self-service data migration, integration, and management capabilities. Customers can quickly and reliably import, and export petabytes of data to Azure from different kinds of sources. Informatica Cloud Services for Azure provides native, high volume, high-performance connectivity to Azure Synapse, SQL Database, Blob Storage, Data Lake Store, and Azure Cosmos DB. <br><br> **2.Informatica PowerCenter** PowerCenter is a metadata-driven data integration platform that jumpstarts and accelerates data integration projects to deliver data to the business more quickly than manual hand coding. It serves as the foundation for your data integration investments |**Informatica Cloud services for Azure**<br>[Product page](https://www.informatica.com/products/cloud-integration.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.iics-winter)<br><br> **Informatica PowerCenter**<br>[Product page](https://www.informatica.com/products/data-integration/powercenter.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.powercenter/)<br>| | ![Information Builders](./medim) | | ![Loome](./media/sql-data-warehouse-partner-data-integration/loome-logo.png) |**Loome**<br>Loome provides a unique governance workbench that seamlessly integrates with Azure Synapse. It allows you to quickly onboard your data to the cloud and load your entire data source into ADLS in Parquet format. You can orchestrate data pipelines across data engineering, data science and HPC workloads, including native integration with Azure Data Factory, Python, SQL, Synapse Spark, and Databricks. Loome allows you to easily monitor Data Quality exceptions reinforcing Synapse as your strategic Data Quality Hub. Loome keeps an audit trail of resolved issues, and proactively manages data quality with a fully automated data quality engine generating audience targeted alerts in real time.| [Product page](https://www.loomesoftware.com)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bizdataptyltd1592265042221.loome?tab=Overview) | | ![Lyftron](./media/sql-data-warehouse-partner-data-integration/lyftron-logo.png) |**Lyftron**<br>Lyftron modern data hub combines an effortless data hub with agile access to data sources. Lyftron eliminates traditional ETL/ELT bottlenecks with automatic data pipeline and make data instantly accessible to BI user with the modern cloud compute of Azure Synapse, Spark & Snowflake. Lyftron connectors automatically convert any source into normalized, ready-to-query relational format and replication. It offers advanced security, data governance and transformation, with simple ANSI SQL along with search capability on your enterprise data catalog.| [Product page](https://lyftron.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/lyftron.lyftronapp?tab=Overview) | | ![Matillion](./media/sql-data-warehouse-partner-data-integration/matillion-logo.png) |**Matillion**<br>Matillion is data transformation software for cloud data warehouses. Only Matillion is purpose-built for Azure Synapse enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Matillion products are highly rated and trusted by companies of all sizes to meet their data integration and transformation needs. 
Learn more about how you can unlock the potential of your data with Matillion's cloud-based approach to data transformation.| [Product page](https://www.matillion.com/technology/cloud-data-warehouse/microsoft-azure-synapse/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/matillion.matillion-etl-azure-synapse?tab=Overview) |
-| ![oh22 HEDDA.IO](./media/sql-data-warehouse-partner-data-integration/heddaiowhitebg-logo.png) |**oh22 HEDDA<span></span>.IO**<br>oh22ΓÇÖs HEDDA<span></span>.IO is a knowledge-driven data quality product built for Microsoft Azure. It enables you to build a knowledge base and use it to perform various critical data quality tasks, including correction, enrichment, and standardization of your data. HEDDA<span></span>.IO also allows you to do data cleansing by using cloud-based reference data services provided by reference data providers or developed and provided by you.| [Product page](https://github.com/oh22is/HEDDA.IO)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/oh22.hedda-io) |
+| ![oh22 HEDDA.IO](./media/sql-data-warehouse-partner-data-integration/heddaiowhitebg-logo.png) |**oh22 HEDDA<span></span>.IO**<br>oh22's HEDDA<span></span>.IO is a knowledge-driven data quality product built for Microsoft Azure. It enables you to build a knowledge base and use it to perform various critical data quality tasks, including correction, enrichment, and standardization of your data. HEDDA<span></span>.IO also allows you to do data cleansing by using cloud-based reference data services provided by reference data providers or developed and provided by you.| [Product page](https://github.com/oh22is/HEDDA.IO)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/oh22.hedda-io) |
| ![Precisely](./media/sql-data-warehouse-partner-data-integration/precisely-logo.png) |**Precisely**<br>Precisely Connect ETL enables extract, transform, and load (ETL) of data from multiple sources to Azure targets. Connect ETL is an easy-to-configure tool that doesn't require coding or tuning. ETL transformation can be done on the fly. It eliminates the need for costly database staging areas or manual pushes, allowing you to create your own data blends with consistent sustainable performance. Import legacy data from multiple sources including mainframe DB2, VSAM, IMS, Oracle, SQL Server, Teradata, and write them to cloud targets including Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage. By using the high performance Connect ETL engine, you can expect optimal performance and consistency.|[Product page](https://www.precisely.com/solution/microsoft-azure)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/syncsort.dmx) | | ![Qlik Data Integration](./media/sql-data-warehouse-partner-business-intelligence/qlik_logo.png) |**Qlik Data Integration**<br>Qlik Data Integration provides an automated solution for loading data into Azure Synapse. It simplifies batch loading and incremental replication of data from many sources: SQL Server, Oracle, DB2, Sybase, MySQL, and more. |[Product page](https://www.qlik.com/us/products/data-integration-products)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform) <br> | | ![Qubole](./media/sql-data-warehouse-partner-data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qubole-inc.qubole-data-service?tab=Overview) |
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![TimeXtender](./media/sql-data-warehouse-partner-data-integration/timextender-logo.png) |**TimeXtender**<br>TimeXtender's Discovery Hub helps companies build a modern data estate by providing an integrated data management platform that accelerates time to data insights by up to 10 times. Going beyond everyday ETL and ELT, it provides capabilities for data access, data modeling, and compliance in a single platform. Discovery Hub provides a cohesive data fabric for cloud scale analytics. It allows you to connect and integrate various data silos, catalog, model, move, and document data for analytics and AI. | [Product page](https://www.timextender.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=timextender&page=1) | | ![Trifacta](./media/sql-data-warehouse-partner-data-integration/trifacta_logo.png) |**Trifacta Wrangler**<br> Trifacta helps individuals and organizations explore, and join together diverse data for analysis. Trifacta Wrangler is designed to handle data wrangling workloads that need to support data at scale and a large number of end users.|[Product page](https://www.trifacta.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trifactainc1587522950142.trifactaazure?tab=Overview) | | ![WhereScape](./media/sql-data-warehouse-partner-data-integration/wherescape_logo.png) |**Wherescape RED**<br> WhereScape RED is an IDE that provides teams with automation tools to streamline ETL workflows. The IDE provides best practice, optimized native code for popular data targets. Use WhereScape RED to cut the time to develop, deploy, and operate your data infrastructure.|[Product page](https://www.wherescape.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wherescapesoftware.wherescape-red?source=datamarket&tab=Overview) |
-| ![Xplenty](./media/sql-data-warehouse-partner-data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. XplentyΓÇÖs point & select, drag & drop interface enables data integration, processing and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure blob storage, and SQL Server. Xplenty also supports all Web Services that are accessible via Rest API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
+| ![Xplenty](./media/sql-data-warehouse-partner-data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. Xplenty's point & select, drag & drop interface enables data integration, processing and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure blob storage, and SQL Server. Xplenty also supports all Web Services that are accessible via Rest API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
## Next steps To learn more about other partners, see [Business Intelligence partners](sql-data-warehouse-partner-business-intelligence.md), [Data Management partners](sql-data-warehouse-partner-data-management.md), and [Machine Learning and AI partners](sql-data-warehouse-partner-machine-learning-ai.md).+
+See how to [discover partner solutions through Synapse Studio](sql-data-warehouse-partner-browse-partners.md).
+
synapse-analytics Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-user-defined-schemas.md
FROM [edw].customer
> [!NOTE] > Any change in schema strategy requires a review of the security model for the database. In many cases, you might be able to simplify the security model by assigning permissions at the schema level.
-If more granular permissions are required, you can use database roles. For more information about database roles, see the [Manage database roles and users](../../analysis-services/analysis-services-database-users.md) article.
+If more granular permissions are required, you can use database roles. For more information about database roles, see the [Manage database roles and users](https://docs.microsoft.com/sql/relational-databases/security/authentication-access/database-level-roles) article.
## Next steps
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If your query fails with the error message 'This query can't be executed due to
- Visit [performance best practices for serverless SQL pool](./best-practices-serverless-sql-pool.md) to optimize query.
+### Could not allocate tempdb space while transferring data from one distribution to another
+
+This error is a special case of the generic [query fails because it cannot be executed due to current resource constraints](#query-fails-because-it-cannot-be-executed-due-to-current-resource-constraints) error. It's returned when the resources allocated to the `tempdb` database are insufficient to run the query.
+
+Apply the same mitigations and best practices before you file a support ticket.
+ ### Query fails with error while handling an external file. If your query fails with the error message 'error handling external file: Max errors count reached', it means that there is a mismatch between a specified column type and the data that needs to be loaded.
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/create-host-pools-azure-marketplace.md
To register the desktop app group to a workspace:
- A workspace, if you chose to create it. - If you chose to register the desktop app group, the registration will be completed. - Virtual machines, if you chose to create them, which are joined to the domain and registered with the new host pool.
- - A download link for an Azure Resource Management template based on your configuration.
+ - A download link for an Azure Resource Manager template based on your configuration.
After that, you're all done!
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/create-host-pools-powershell.md
You can create a virtual machine in multiple ways:
- [Create a virtual machine from an Azure Gallery image](../virtual-machines/windows/quick-create-portal.md#create-virtual-machine) - [Create a virtual machine from a managed image](../virtual-machines/windows/create-vm-generalized-managed.md)-- [Create a virtual machine from an unmanaged image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-user-image-data-disks) >[!NOTE] >If you're deploying a virtual machine using Windows 7 as the host OS, the creation and deployment process will be a little different. For more details, see [Deploy a Windows 7 virtual machine on Azure Virtual Desktop](./virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md).
virtual-machines Disks Pools Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-deploy.md
+
+ Title: Deploy an Azure disk pool (preview)
+description: Learn how to deploy an Azure disk pool.
+++ Last updated : 07/13/2021+++
+# Deploy a disk pool (preview)
+
+This article covers how to deploy and configure an Azure disk pool (preview). Before deploying a disk pool, read the [conceptual](disks-pools.md) and [planning](disks-pools-planning.md) articles.
+
+In order for a disk pool to work correctly, you must complete the following steps:
+- Register your subscription for the preview.
+- Delegate a subnet to your disk pool.
+- Assign the disk pool resource provider role-based access control (RBAC) permissions to manage your disk resources.
+- Create the disk pool.
+ - Add disks to your disk pool.
++
+## Prerequisites
+
+In order to successfully deploy a disk pool, you must have:
+
+- A set of managed disks you want to add to a disk pool.
+- A virtual network with a dedicated subnet deployed for your disk pool.
+
+If you're going to use the Azure PowerShell module, install [version 6.1.0 or newer](/powershell/module/az.diskpool/?view=azps-6.1.0&preserve-view=true).
+
+If you're going to use the Azure CLI, install [the latest version](/cli/azure/disk-pool?view=azure-cli-latest).
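+
+If you want to confirm which version of the Azure PowerShell module you have before you start, a check along the following lines should work. This is a minimal sketch that assumes the `Az` meta-module was installed from the PowerShell Gallery; adjust it if you install individual Az.* modules instead.
+
+```azurepowershell
+# Check the installed Az module version (requires PowerShellGet)
+Get-InstalledModule -Name Az | Select-Object Name, Version
+
+# Install or update the Az module if it doesn't meet the 6.1.0 minimum
+Install-Module -Name Az -MinimumVersion 6.1.0 -Scope CurrentUser -Repository PSGallery
+```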
+
+## Register your subscription for the preview
+
+Register your subscription with the **Microsoft.StoragePool** resource provider so that you can create and use disk pools.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. On the Azure portal menu, search for and select **Subscriptions**.
+1. Select the subscription you want to use for disk pools.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. Find the resource provider **Microsoft.StoragePool** and select **Register**.
+
+Once your subscription has been registered, you can deploy a disk pool.
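+
+If you prefer to register the provider from a command line instead of the portal, a minimal Azure PowerShell sketch looks like this. The provider namespace is the one named above; everything else uses standard Az cmdlets.
+
+```azurepowershell
+# Register the Microsoft.StoragePool resource provider on the current subscription
+Register-AzResourceProvider -ProviderNamespace Microsoft.StoragePool
+
+# Check the registration state; wait until it reports "Registered" before deploying a disk pool
+Get-AzResourceProvider -ProviderNamespace Microsoft.StoragePool |
+    Select-Object ProviderNamespace, RegistrationState
+```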
+
+## Get started
+
+### Delegate subnet permission
+
+For your disk pool to work with your client machines, you must delegate a subnet to your Azure disk pool. When creating a disk pool, you specify a virtual network and the delegated subnet. You can either create a new subnet or use an existing one, and then delegate it to the **Microsoft.StoragePool/diskPools** resource provider.
+
+1. Go to the virtual networks blade in the Azure portal and select the virtual network to use for the disk pool.
+1. Select **Subnets** from the virtual network blade and select **+Subnet**.
+1. Create a new subnet by completing the following required fields in the Add Subnet page:
+ - Subnet delegation: Select Microsoft.StoragePool
+
+For more information on subnet delegation, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md).
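+
+If you'd rather delegate the subnet with Azure PowerShell than through the portal, a sketch along these lines should work. The virtual network, resource group, and subnet names are placeholders for illustration only.
+
+```azurepowershell
+# Load the virtual network and the subnet you want to delegate (placeholder names)
+$vnet = Get-AzVirtualNetwork -Name "<yourVNetName>" -ResourceGroupName "<yourResourceGroupName>"
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name "<yourSubnetName>" -VirtualNetwork $vnet
+
+# Delegate the subnet to the disk pool resource provider
+Add-AzDelegation -Name "diskPoolDelegation" -ServiceName "Microsoft.StoragePool/diskPools" -Subnet $subnet
+
+# Apply the updated subnet configuration back to the virtual network
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```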
+
+### Grant the StoragePool resource provider permissions to the disks that will be added to the disk pool
+
+To be used in a disk pool, a disk must meet the following requirements:
+
+- The **StoragePool** resource provider must have been assigned an RBAC role that contains Read & Write permissions for every managed disk in the disk pool.
+- Must be either a premium SSD or an ultra disk in the same availability zone as the disk pool.
+ - For ultra disks, it must have a disk sector size of 512 bytes.
+- Must be a shared disk with a maxShares value of two or greater.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select either the resource group that contains the disks or each disk individually.
+1. Select **Access control (IAM)**.
+1. Select **Add role assignment (Preview)**, and select **Virtual Machine Contributor** in the role list.
+
+ If you prefer, you may create your own custom role instead. A custom role for disk pools must have the following RBAC permissions to function: **Microsoft.Compute/disks/write** and **Microsoft.Compute/disks/read**.
+
+1. Select **User, group, or service principal** in the **Assign access to** list.
+1. In the Select section, search for **StoragePool Resource Provider**, select it, and save.
+
+### Create a disk pool
+For optimal performance, deploy the disk pool in the same availability zone as your clients. If you're deploying a disk pool for an Azure VMware Solution cloud and need guidance on identifying the availability zone, fill in this [form](https://aka.ms/DiskPoolCollocate).
+
+# [Portal](#tab/azure-portal)
+
+1. Search for and select **Disk pool**.
+1. Select **+Add** to create a new disk pool.
+1. Fill in the requested details, and select the same region and availability zone as the clients that will use the disk pool.
+1. Select the subnet that has been delegated to the **StoragePool** resource provider, and its associated virtual network.
+1. Select **Next** to add disks to your disk pool.
+
+ :::image type="content" source="media/disks-pools-deploy/create-a-disk-pool.png" alt-text="Screenshot of the basics blade for creating a disk pool.":::
+
+#### Add disks
+
+##### Prerequisites
+
+To add a disk, it must meet the following requirements:
+
+- Must be either a premium SSD or an ultra disk in the same availability zone as the disk pool.
+ - Currently, you can only add premium SSDs in the portal. Ultra disks must be added with either the Azure PowerShell module or the Azure CLI.
+ - For ultra disks, it must have a disk sector size of 512 bytes.
+- Must be a shared disk with a maxShares value of two or greater.
+- You must grant the disk pool resource provider RBAC permissions to manage the disk you plan to add.
+
+If your disk meets these requirements, you can add it to a disk pool by selecting **+Add disk** in the disk pool blade.
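+
+If you'd rather add the disk from Azure PowerShell than through the portal, the sketch below is one way to do it. It assumes the `Az.DiskPool` module's `Update-AzDiskPool` cmdlet accepts the full list of disk resource IDs through a `-DiskId` parameter and that the disk pool object exposes its current disks through a `Disk` property; verify both against the module version you have installed before relying on this.
+
+```azurepowershell
+# Placeholder values for illustration
+$resourceGroupName = "<yourResourceGroupName>"
+$diskPoolName = "<yourDiskPoolName>"
+$newDiskId = "<resourceIdOfTheManagedDiskToAdd>"
+
+# Disk pools take the complete list of disks, so combine the existing disk IDs with the new one
+$diskPool = Get-AzDiskPool -ResourceGroupName $resourceGroupName -Name $diskPoolName
+$diskIds = @($diskPool.Disk.Id) + $newDiskId
+
+# Update the disk pool with the combined list (assumed -DiskId parameter; check your module version)
+Update-AzDiskPool -ResourceGroupName $resourceGroupName -Name $diskPoolName -DiskId $diskIds
+```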
++
+### Enable iSCSI
+
+1. Select the **iSCSI** blade.
+1. Select **Enable iSCSI**.
+1. Enter the name of the iSCSI target. The iSCSI target IQN is generated based on this name.
+ - To disable the iSCSI target for an individual disk, select **Disable** under **Status** for that disk.
+ - The ACL mode is set to **Dynamic** by default. To use your disk pool as a storage solution for Azure VMware Solution, the ACL mode must be set to **Dynamic**.
+1. Select **Review + create**.
+
+ :::image type="content" source="media/disks-pools-deploy/create-a-disk-pool-iscsi-blade.png" alt-text="Screenshot of the iSCSI blade for creating a disk pool.":::
+
+# [PowerShell](#tab/azure-powershell)
+
+The provided script performs the following:
+- Installs the necessary module for creating and using disk pools.
+- Creates a disk and assigns RBAC permissions to it. If you already did this, you can comment out these sections of the script.
+- Creates a disk pool and adds the disk to it.
+- Creates and enables an iSCSI target.
+
+Replace the variables in this script with your own values before running it. You'll also need to modify it to use an existing ultra disk if you've filled out the ultra disk form.
+
+```azurepowershell
+# Install the required module for Disk Pool
+Install-Module -Name Az.DiskPool -RequiredVersion 0.1.1 -Repository PSGallery
+
+# Sign in to the Azure account and setup the variables
+$subscriptionID = "<yourSubID>"
+Set-AzContext -Subscription $subscriptionID
+$resourceGroupName= "<yourResourceGroupName>"
+$location = "<desiredRegion>"
+$diskName = "<desiredDiskName>"
+$availabilityZone = "<desiredAvailabilityZone>"
+$subnetId='<yourSubnetID>'
+$diskPoolName = "<desiredDiskPoolName>"
+$iscsiTargetName = "<desirediSCSITargetName>" # This will be used to generate the iSCSI target IQN name
+$lunName = "<desiredLunName>"
+
+# You can skip this step if you have already created the disk and assigned proper RBAC permission to the resource group the disk is deployed to
+$diskconfig = New-AzDiskConfig -Location $location -DiskSizeGB 1024 -AccountType Premium_LRS -CreateOption Empty -zone $availabilityZone -MaxSharesCount 2
+$disk = New-AzDisk -ResourceGroupName $resourceGroupName -DiskName $diskName -Disk $diskconfig
+$diskId = $disk.Id
+$scopeDef = "/subscriptions/" + $subscriptionId + "/resourceGroups/" + $resourceGroupName
+$rpId = (Get-AzADServicePrincipal -SearchString "StoragePool Resource Provider").id
+
+New-AzRoleAssignment -ObjectId $rpId -RoleDefinitionName "Virtual Machine Contributor" -Scope $scopeDef
+
+# Create a Disk Pool
+New-AzDiskPool -Name $diskPoolName -ResourceGroupNa