Updates from: 08/03/2021 03:05:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 05/28/2021 Last updated : 08/02/2021
After the ECMA Connector Host schema mapping has been configured, start the serv
1. In **Event Viewer**, expand **Applications and Services** logs, and select **Microsoft ECMA2Host Logs**.
1. As changes are received by the connector host, events are written to the application log.
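If you prefer to check the log from a script rather than the Event Viewer UI, here's a minimal sketch using `wevtutil`; the channel name is an assumption, so confirm it against the name shown under **Applications and Services Logs** and adjust if it differs.

```python
# Minimal sketch: read the most recent ECMA2Host events without opening Event Viewer.
# The channel name below is an assumption -- confirm the exact name shown under
# "Applications and Services Logs" in Event Viewer and adjust if it differs.
import subprocess

CHANNEL = "Microsoft ECMA2Host Logs"  # assumed channel name

result = subprocess.run(
    ["wevtutil", "qe", CHANNEL, "/c:20", "/rd:true", "/f:text"],  # newest 20 events, as text
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```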
+### Common errors
+
+| Error | Resolution |
+| -- | -- |
+| Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache\8b514472-c18a-4641-9a44-732c296534e8\Microsoft.IAM.Connector.GenericSql.dll' or one of its dependencies. Access is denied. | Ensure that the network service account has 'full control' permissions over the cache folder. |
+| Invalid LDAP style of object's DN. DN: username@domain.com | Ensure the 'DN is Anchor' checkbox is not selected on the 'Connectivity' page of the ECMA host. Ensure the 'Autogenerated' checkbox is selected on the 'Object types' page of the ECMA host.|
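For the "Access is denied" row above, here's a minimal sketch that grants the network service account full control over the cache folder; the folder path comes from the error message and the `icacls` grant string is an assumption to adjust for your installation. Run it from an elevated prompt.

```python
# Sketch: grant NETWORK SERVICE full control over the ECMA2Host cache folder.
# The path and account string are assumptions taken from the error above -- adjust as needed.
import subprocess

CACHE_FOLDER = r"C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache"

subprocess.run(
    [
        "icacls",
        CACHE_FOLDER,
        "/grant",
        r"NT AUTHORITY\NETWORK SERVICE:(OI)(CI)F",  # full control, inherited by subfolders and files
        "/T",                                       # apply recursively
    ],
    check=True,
)
```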
+ ## Understand incoming SCIM requests
+
+Requests made by Azure AD to the provisioning agent and connector host use the SCIM protocol. Requests made from the host to apps use the protocol the app supports. Requests from the host back through the agent to Azure AD also rely on SCIM. You can learn more about the SCIM implementation in [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md).
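To make the request shape concrete, here's a minimal sketch of the kind of SCIM 2.0 user-creation call described above; the endpoint URL, token, and attribute set are placeholders, not the connector host's actual configuration.

```python
# Sketch of a SCIM 2.0 "create user" request like the ones the provisioning service sends.
# Endpoint, token, and attributes are placeholders; real payloads follow your attribute mappings.
import requests

SCIM_ENDPOINT = "https://localhost:8585/scim"  # placeholder endpoint
TOKEN = "<secret-token>"                       # placeholder bearer token

payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "active": True,
}

response = requests.post(
    f"{SCIM_ENDPOINT}/Users",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
print(response.status_code, response.json())
```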
By using Azure AD, you can monitor the provisioning service in the cloud and col
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md) - [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md) - [Generic SQL connector](on-premises-sql-connector-configure.md)-- [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
+- [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
||:--:|::|::|::|:--:|--| | AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 | | Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact |
-| Excelsecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
-| Feitian | ![y] | ![y]| ![y]| ![y]| ![n] | https://shop.ftsafe.us/pages/microsoft |
+| Excelsecu | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
+| Feitian | ![y] | ![y]| ![y]| ![y]| ![y] | https://shop.ftsafe.us/pages/microsoft |
| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key | | HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | | Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido | | IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon | | Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ | | KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
-| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/product |
+| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/product |
| OneSpan Inc. | ![y] | ![n]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido | | Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
+| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key | | TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ | | VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
The following providers offer FIDO2 security keys of different form factors that
> [!NOTE] > If you purchase and plan to use NFC-based security keys, you need a supported NFC reader for the security key. The NFC reader isn't an Azure requirement or limitation. Check with the vendor for your NFC-based security key for a list of supported NFC readers.
-If you're a vendor and want to get your device on this list of supported devices, check out our guidance on how to [become a Microsoft-compatible FIDO2 security key vendor](/security/zero-trust/isv/fido2-hardware-vendor).
+If you're a vendor and want to get your device on this list of supported devices, check out our guidance on how to [become a Microsoft-compatible FIDO2 security key vendor](concept-fido2-hardware-vendor.md).
To get started with FIDO2 security keys, complete the following how-to:
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
+
+ Title: Become a Microsoft-Compatible FIDO2 Security Key Vendor for sign-in to Azure AD
+description: Explains the process to become a FIDO2 hardware partner
Last updated : 08/02/2021
+# Become a Microsoft-compatible FIDO2 security key vendor
+
+Most hacking-related breaches use stolen or weak passwords. IT often enforces stronger password complexity or more frequent password changes to reduce the risk of a security incident. However, this increases help desk costs and leads to a poor user experience, because users must memorize or store new, complex passwords.
+
+FIDO2 security keys offer an alternative. They can replace weak credentials with strong, hardware-backed public/private-key credentials that can't be reused, replayed, or shared across services. Security keys support shared-device scenarios, so you can carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that's part of your organization.
+
+Microsoft partners with FIDO2 security key vendors to ensure that security devices work on Windows, the Microsoft Edge browser, and online Microsoft accounts, enabling strong passwordless authentication.
+
+You can become a Microsoft-compatible FIDO2 security key vendor through the following process. Microsoft doesn't commit to go-to-market activities with the partner and evaluates partner priority based on customer demand.
+
+1. First, your authenticator needs a FIDO2 certification. We can't work with providers that don't have a FIDO2 certification. To learn more about certification, visit [https://fidoalliance.org/certification/](https://fidoalliance.org/certification/).
+2. After you have a FIDO2 certification, submit your request through our form: [https://forms.office.com/r/NfmQpuS9hF](https://forms.office.com/r/NfmQpuS9hF). Our engineering team only tests the compatibility of your FIDO2 devices; we don't test the security of your solutions.
+3. After we confirm a move to the testing phase, the process usually takes about three to six months. The steps typically involve:
+ - An initial discussion between Microsoft and your team.
+ - Verification of FIDO Alliance certification, or of the path to certification if it isn't complete.
+ - An overview of the device from the vendor.
+ - Microsoft shares its test scripts with you. Our engineering team can answer questions if you have specific needs.
+ - You complete the tests and send all passing results to the Microsoft engineering team.
+ - After Microsoft confirms the results, you send multiple hardware/solution samples of each device to the Microsoft engineering team.
+ - Upon receipt, the Microsoft engineering team verifies the test scripts and the user experience flow.
+4. When your device passes all tests run by the Microsoft engineering team, Microsoft confirms that the vendor's device is listed in [the FIDO MDS](https://fidoalliance.org/metadata/) (see the sketch after this list for one way to check the listing).
+5. Microsoft adds your FIDO2 security key to the Azure AD backend and to our list of approved FIDO2 vendors.
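As mentioned in step 4, one way to verify a device's MDS listing is to pull the metadata blob and search its entries. A minimal sketch, assuming the blob is served as a signed JWT at the URL below; it decodes the payload without verifying the signature and is for illustration only.

```python
# Sketch: download the FIDO Alliance MDS blob (a signed JWT), decode its payload without
# verifying the signature, and look for a device by description. The URL is an assumption.
import base64
import json
import requests

MDS_BLOB_URL = "https://mds3.fidoalliance.org/"  # assumed blob location

jwt_blob = requests.get(MDS_BLOB_URL, timeout=30).text
payload_b64 = jwt_blob.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
metadata = json.loads(base64.urlsafe_b64decode(payload_b64))

for entry in metadata.get("entries", []):
    description = entry.get("metadataStatement", {}).get("description", "")
    if "YubiKey" in description:  # substitute your device's description
        print(description)
```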
+
+## Current partners
+
+The following table lists partners who are Microsoft-compatible FIDO2 security key vendors.
+
+| **Provider** | **Link** |
+| | |
+| AuthenTrend | [https://authentrend.com/about-us/#pg-35-3](https://authentrend.com/about-us/#pg-35-3) |
+| Ensurity | [https://www.ensurity.com/contact](https://www.ensurity.com/contact) |
+| Excelsecu | [https://www.excelsecu.com/productdetail/esecufido2secu.html](https://www.excelsecu.com/productdetail/esecufido2secu.html) |
+| Feitian | [https://ftsafe.us/pages/microsoft](https://ftsafe.us/pages/microsoft) |
+| Go-Trust ID | [https://www.gotrustid.com/](https://www.gotrustid.com/idem-key) |
+| HID | [https://www.hidglobal.com/contact-us](https://www.hidglobal.com/contact-us) |
+| Hypersecu | [https://www.hypersecu.com/hyperfido](https://www.hypersecu.com/hyperfido) |
+| IDmelon Technologies Inc. | [https://www.idmelon.com/#idmelon](https://www.idmelon.com/#idmelon) |
+| Kensington | [https://www.kensington.com/solutions/product-category/why-biometrics/](https://www.kensington.com/solutions/product-category/why-biometrics/) |
+| KONA I | [https://konai.com/business/security/fido](https://konai.com/business/security/fido) |
+| Nymi | [https://www.nymi.com/product](https://www.nymi.com/product) |
+| OneSpan Inc. | [https://www.onespan.com/products/fido](https://www.onespan.com/products/fido) |
+| Thales | [https://cpl.thalesgroup.com/access-management/authenticators/fido-devices](https://cpl.thalesgroup.com/access-management/authenticators/fido-devices) |
+| Thetis | [https://thetis.io/collections/fido2](https://thetis.io/collections/fido2) |
+| Token2 Switzerland | [https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key](https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key) |
+| TrustKey Solutions | [https://www.trustkeysolutions.com/security-keys/](https://www.trustkeysolutions.com/security-keys/) |
+| VinCSS | [https://passwordless.vincss.net](https://passwordless.vincss.net/) |
+| Yubico | [https://www.yubico.com/solutions/passwordless/](https://www.yubico.com/solutions/passwordless/) |
+
+## Next steps
+
+[FIDO2 Compatibility](fido2-compatibility.md)
+
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/whats-new-docs.md
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## July 2021
+
+### New articles
+
+- [Azure AD application registration security best practices](security-best-practices-for-app-registration.md)
+- [Role-based access control for application developers](custom-rbac-for-developers.md)
+
+### Updated articles
+
+- [How to migrate a JavaScript app from ADAL.js to MSAL.js](msal-compare-msal-js-and-adal-js.md)
+- [How to migrate a Node.js app from ADAL to MSAL](msal-node-migration.md)
+- [Migrate confidential client applications from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md)
+- [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md)
+- [Protected web API: Verify scopes and app roles](scenario-protected-web-api-verification-scope-app-roles.md)
+- [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-v2-aspnet-core-webapp.md)
+ ## June 2021
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Quickstart: Register an application with the Microsoft identity platform](quickstart-register-app.md) - [Quickstart: Call an ASP.NET web API that's protected by Microsoft identity platform](quickstart-v2-dotnet-native-aspnet.md) - [Tutorial: Sign in users and call the Microsoft Graph API from an Android application](tutorial-v2-android.md)-
-## April 2021
-
-### New articles
--- [Claims mapping policy type](reference-claims-mapping-policy-type.md)-- [How to migrate a Node.js app from ADAL to MSAL](msal-node-migration.md)-
-### Updated articles
--- [Configurable token lifetimes in the Microsoft identity platform (preview)](active-directory-configurable-token-lifetimes.md)-- [Configure token lifetime policies (preview)](configure-token-lifetimes.md)-- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md)-- [Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md)-- [Quickstart: Sign in users and get an access token in a Node web app using the auth code flow](quickstart-v2-nodejs-webapp-msal.md)-- [Quickstart: Sign in users and get an access token in an Angular single-page application](quickstart-v2-angular.md)-- [Single-page application: Acquire a token to call an API](scenario-spa-acquire-token.md)-- [Single-page application: Code configuration](scenario-spa-app-configuration.md)-- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)-- [Use MSAL in a national cloud environment](msal-national-cloud.md)-- [Understanding Azure AD application consent experiences](application-consent-experience.md)
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-group-advanced.md
A user can be a member of multiple groups with licenses. Here are some things to
## Direct licenses coexist with group licenses
-When a user inherits a license from a group, you can't directly remove or modify that license assignment in the user's properties. You can change the license assignment only in the group and the changes are then propagated to all users. It is possible, however, to assign the same product license to the user directly and by group license assignment. In this way, you can enable additional services from the product just for one user, without affecting other users.
+When a user inherits a license from a group, you can't directly remove or modify that license assignment in the user's properties. You can change the license assignment only in the group, and the changes are then propagated to all users. If you need to assign additional features to a user whose license comes from a group assignment, you must create another group that assigns those additional features to the user.
Directly assigned licenses can be removed, and don't affect a user's inherited licenses. Consider the user who inherits an Office 365 Enterprise E3 license from a group.
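To illustrate the guidance above about creating another group for additional features, here's a minimal sketch, assuming the Microsoft Graph `assignLicense` action is available on groups and that you already hold an access token with sufficient permissions; the group ID and SKU GUID are placeholders (product GUIDs are listed in the licensing service plan reference).

```python
# Sketch: assign a product license to a second group that enables extra service plans.
# Group ID, SKU GUID, and token are placeholders -- this is an illustration, not a recipe.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"        # placeholder
GROUP_ID = "<group-object-id>"  # placeholder

body = {
    "addLicenses": [
        {
            "skuId": "<sku-guid>",  # product GUID, e.g. from the licensing reference table
            "disabledPlans": [],    # leave empty to enable every service plan in this assignment
        }
    ],
    "removeLicenses": [],
}

response = requests.post(
    f"{GRAPH}/groups/{GROUP_ID}/assignLicense",
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
```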
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on June 7th, 2021.
+>This information last updated on August 2nd, 2021.
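If you'd rather read these string IDs, GUIDs, and service plans from your own tenant instead of the table below, here's a minimal sketch assuming the Microsoft Graph `subscribedSkus` endpoint and a pre-acquired access token.

```python
# Sketch: list the SKUs and service plans purchased in a tenant via Microsoft Graph.
# The access token is a placeholder; acquire one with the auth library of your choice.
import requests

TOKEN = "<access-token>"  # placeholder

skus = requests.get(
    "https://graph.microsoft.com/v1.0/subscribedSkus",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
).json()

for sku in skus.get("value", []):
    print(sku["skuPartNumber"], sku["skuId"])
    for plan in sku.get("servicePlans", []):
        print("  ", plan["servicePlanName"], plan["servicePlanId"])
```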
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | | | | | | |
+| AI Builder Capacity add-on | CDSAICAPACITY | d2dea78b-507c-4e56-b400-39447f4738f8 | CDSAICAPACITY (a7c70a41-5e02-4271-93e6-d9b4184d83f5)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | AI Builder capacity add-on (a7c70a41-5e02-4271-93e6-d9b4184d83f5)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| APP CONNECT IW | SPZA_IW | 8f0c5670-4e56-4892-b06d-91c085d7004f | SPZA (0bfc98ed-1dbc-4a97-b246-701754e48b17)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | APP CONNECT (0bfc98ed-1dbc-4a97-b246-701754e48b17)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Microsoft 365 Audio Conferencing | MCOMEETADV | 0c266dff-15dd-4b49-8397-2bb16070ed52 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | | AZURE ACTIVE DIRECTORY BASIC | AAD_BASIC | 2b9c8e7c-319c-43a2-a2a0-48c5c6161de7 | AAD_BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | MICROSOFT AZURE ACTIVE DIRECTORY BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| DYNAMICS 365 FOR SALES ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT | DYN365_SCM | f2e48cb3-9da0-42cd-8464-4a54ce198ad0 | DYN365_CDS_SUPPLYCHAINMANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>D365_SCM (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | COMMON DATA SERVICE FOR DYNAMICS 365 SUPPLY CHAIN MANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYNAMICS 365 FOR FINANCE AND OPERATIONS, ENTERPRISE EDITION - REGULATORY SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | DYNAMICS 365 FOR TEAM MEMBERS ENTERPRISE EDITION | DYN365_ENTERPRISE_TEAM_MEMBERS | 8e7a3d30-d97d-43ab-837c-d7701cef83dc | DYN365_Enterprise_Talent_Attract_TeamMember (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_Enterprise_Talent_Onboard_TeamMember (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYN365_ENTERPRISE_TEAM_MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>Dynamics_365_for_Retail_Team_members (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>Dynamics_365_for_Talent_Team_members (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TEAM MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS FOR DYNAMICS 365 
(52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
-| DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS | DYN365_ENTERPRISE_P1_IW | 338148b6-1b11-4102-afb9-f92b6cdc0f8d | DYN365_ENTERPRISE_P1_IW (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS | DYN365_ENTERPRISE_P1_IW | 338148b6-1b11-4102-afb9-f92b6cdc0f8d | DYN365_ENTERPRISE_P1_IW (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Remote Assist | MICROSOFT_REMOTE_ASSIST | 7a551360-26c4-4f61-84e6-ef715673e083 | CDS_REMOTE_ASSIST (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>MICROSOFT_REMOTE_ASSIST (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | Common Data Service for Remote Assist (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>Microsoft Remote Assist (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929) |
+| Dynamics 365 Remote Assist HoloLens | MICROSOFT_REMOTE_ASSIST_HOLOLENS | e48328a2-8e98-4484-a70f-a99f8ac9ec89 | CDS_REMOTE_ASSIST (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>MICROSOFT_REMOTE_ASSIST (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | Common Data Service for Remote Assist (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>Microsoft Remote Assist (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929) |
| DYNAMICS 365 TALENT: ONBOARD | DYNAMICS_365_ONBOARDING_SKU | b56e7ccc-d5c7-421f-a23b-5c18bdbad7c0 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_Talent_Onboard (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | COMMON DATA SERVICE (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | | DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR_OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN 
(f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365(b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) | | ENTERPRISE MOBILITY + SECURITY E3 | EMS | efccb6f7-5641-4e0e-bd10-b4976e1bf68e | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR IDENTITY (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| Enterprise Mobility + Security G3 GCC | EMS_GOV | c793db86-5237-494e-9b11-dcd4877c2c8c | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| EXCHANGE ONLINE (PLAN 1) | EXCHANGESTANDARD | 4b9405b0-7788-4568-add1-99614e613b69 | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c) | EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)| | EXCHANGE ONLINE (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) | | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE | EXCHANGEARCHIVE_ADDON | ee02fd1b-340e-4a4b-b355-4a514e4c8943 | EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT POWER APPS PLAN 2 TRIAL | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | COMMON DATA SERVICE ΓÇô VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW P2 VIRAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS TRIAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | | MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/> EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/> MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | MICROSOFT STREAM | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) |
-| MICROSOFT TEAM (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
+| MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (s8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 |
+| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Office 365 A5 for faculty| ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing 
(3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A5 for students | ENTERPRISEPREMIUM_STUDENT | ee656612-49fa-43e5-b67e-cb1fdf7699df | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 
(96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 
365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 Advanced Compliance | EQUIVIO_ANALYTICS | 1b1b1f7a-8355-43b6-829f-336cfccb744c | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| ONEDRIVE FOR BUSINESS (PLAN 1) | WACONEDRIVESTANDARD | e6778190-713e-4e4f-9119-8b8238de25df | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | ONEDRIVE FOR BUSINESS (PLAN 2) | WACONEDRIVEENTERPRISE | ed01faf2-1d88-4947-ae91-45ca18703a96 | ONEDRIVEENTERPRISE (afcafa6a-d966-4462-918c-ec0b4e0fe642)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | ONEDRIVEENTERPRISE (afcafa6a-d966-4462-918c-ec0b4e0fe642)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | POWERAPPS AND LOGIC FLOWS | POWERAPPS_INDIVIDUAL_USER | 87bbbc60-4754-4998-8c88-227dca264858 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERFLOWSFREE (0b4346bb-8dc3-4079-9dfc-513696f56039)<br/>POWERVIDEOSFREE (2c4ec2dc-c62d-4167-a966-52a3e6374015)<br/>POWERAPPSFREE (e61a2945-1d4e-4523-b6e7-30ba39d20f32) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>LOGIC FLOWS (0b4346bb-8dc3-4079-9dfc-513696f56039)<br/>MICROSOFT POWER VIDEOS BASIC (2c4ec2dc-c62d-4167-a966-52a3e6374015)<br/>MICROSOFT POWERAPPS (e61a2945-1d4e-4523-b6e7-30ba39d20f32) |
+| Power Automate per flow plan | FLOW_BUSINESS_PROCESS | b3a42176-0a8c-4c3f-ba4e-f2b37fe5be6b | CDS_Flow_Business_Process (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_BUSINESS_PROCESS (7e017b61-a6e0-4bdc-861a-932846591f6e) | Common data service for Flow per business process plan (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per business process plan (7e017b61-a6e0-4bdc-861a-932846591f6e) |
+| Power Automate unattended RPA add-on | POWERAUTOMATE_UNATTENDED_RPA | 3539d28c-6e35-4a30-b3a9-cd43d5d3e0e2 |CDS_UNATTENDED_RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_AUTOMATE_UNATTENDED_RPA (0d373a98-a27a-426f-8993-f9a425ae99c5) | Common Data Service Unattended RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate Unattended RPA add-on (0d373a98-a27a-426f-8993-f9a425ae99c5) |
| POWER BI (FREE) | POWER_BI_STANDARD | a403ebcc-fae0-4ca2-8c8c-7a907fd6c235 | BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | POWER BI (FREE) (2049e525-b859-401b-b2a0-e0a31c4b1fe4)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | | POWER BI FOR OFFICE 365 ADD-ON | POWER_BI_ADDON | 45bc2c81-6072-436a-9b0b-3b12eefbc402 | BI_AZURE_P1 (2125cfd7-2110-4567-83c4-c1cd5275163d)<br/>SQL_IS_SSIM (fc0a60aa-feee-4746-a0e3-aecfe81a38dd) |MICROSOFT POWER BI REPORTING AND ANALYTICS PLAN 1 (2125cfd7-2110-4567-83c4-c1cd5275163d)<br/>MICROSOFT POWER BI INFORMATION SERVICES PLAN 1(fc0a60aa-feee-4746-a0e3-aecfe81a38dd) | | POWER BI PRO | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
+| Power BI Pro | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION( 113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
+| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) |
| PROJECT FOR OFFICE 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | | PROJECT ONLINE ESSENTIALS | PROJECTESSENTIALS | 776df282-9fc0-4862-99e2-70e561b9909e | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | PROJECT ONLINE PREMIUM | PROJECTPREMIUM | 09015f9f-377f-4538-bbb5-f75ceb09358a | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| VISIO ONLINE PLAN 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO DESKTOP APP (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO PLAN 2 FOR GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR Government (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | | WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) |
-| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>VIRTUALIZATION RIGHTS FOR WINDOWS 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) |
+| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) |
| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118) |
| WINDOWS STORE FOR BUSINESS | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) |
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
The following scenarios are not supported for staged rollout:
- When you first add a security group for staged rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required. -- While users are in Staged Rollout, when EnforceCloudPasswordPolicyForPasswordSyncedUsers is enabled, password expiration policy is set to 90 days with no option to customize it.
+- While users are in Staged Rollout, password expiration policy is set to 90 days with no option to customize it.
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for Windows 10 version older than 1903. This scenario will fall back to the WS-Trust endpoint of the federation server, even if the user signing in is in scope of staged rollout.
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-export-risk-data.md
Azure AD stores reports and security signals for a defined period of time. When
| Audit logs | 7 days | 30 days | 30 days |
| Sign-ins | 7 days | 30 days | 30 days |
| Azure AD MFA usage | 30 days | 30 days | 30 days |
-| Users at risk | 7 days | 30 days | 90 days |
-| Risky sign-ins | 7 days | 30 days | 90 days |
+| Risky sign-ins | 7 days | 30 days | 30 days |
Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers** and **UserRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an Event Hub, or send data to a partner solution. Find these options in the **Azure portal** > **Azure Active Directory**, **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
+>[!NOTE]
+>The diagnostic settings for RiskyUsers and UserRiskEvents are currently in public preview.
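The same setting can be scripted. The following Azure CLI sketch is illustrative only: it assumes the `microsoft.aadiam` diagnostic-settings ARM endpoint and the `2017-04-01-preview` API version, and it uses placeholder IDs that you must replace; verify the endpoint and API version against the current REST reference before relying on it.

```azurecli
# Hedged sketch: route RiskyUsers and UserRiskEvents to a Log Analytics workspace.
# <subscription-id>, <resource-group>, and <workspace-name> are placeholders.
az rest --method put \
  --url "https://management.azure.com/providers/microsoft.aadiam/diagnosticSettings/RiskDataExport?api-version=2017-04-01-preview" \
  --body '{
    "name": "RiskDataExport",
    "properties": {
      "workspaceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>",
      "logs": [
        { "category": "RiskyUsers", "enabled": true },
        { "category": "UserRiskEvents", "enabled": true }
      ]
    }
  }'
```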
+ [ ![Diagnostic settings screen in Azure AD showing existing configuration](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png) ](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png#lightbox) ## Log Analytics
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/administrative-units.md
It can be useful to restrict administrative scope by using administrative units
A central administrator could: -- Create a role with administrative permissions over only Azure AD users in the business school administrative unit. - Create an administrative unit for the School of Business.-- Populate the administrative unit with only the business school students and staff.
+- Populate the administrative unit with only students and staff within the School of Business.
+- Create a role with administrative permissions over only Azure AD users in the School of Business administrative unit.
- Add the business school IT team to the role, along with its scope. ## License requirements
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-verification-solution.md
Microsoft Authenticator is the mobile application that orchestrates the interact
#### Web front end
-The relying party web frontend uses the Azure AD VC APIs or SDK to verify VCs by generating deep links or QR codes that are consumed by the subjectΓÇÖs wallet. Depending on the scenario, the frontend can be a publicly accessible or internal website to enable end-user experiences that require verification. However, the endpoints that the wallet accesses must be publicly accessible. Specifically, it controls redirection to the wallet with specific request parameters. This is accomplished using the Microsoft-provided APIs and SDK.
+The relying party web front end uses the Azure AD VC APIs or SDK to verify VCs by generating deep links or QR codes that are consumed by the subject's wallet. Depending on the scenario, the front end can be a publicly accessible or internal website to enable end-user experiences that require verification. However, the endpoints that the wallet accesses must be publicly accessible. Specifically, it controls redirection to the wallet with specific request parameters. This is accomplished using the Microsoft-provided APIs and SDK.
#### Business logic
-You can create new logic or use existing logic that is specific to the relying party, and enhance that logic with the presentation of VCs.
+You can create new logic or use existing logic that is specific to the relying party and enhance that logic with the presentation of VCs.
## Scenario-specific designs
Verifiable credentials can also be used to enable faster onboarding by replacing
#### Additional elements
-**Onboarding portal**: This is a web frontend that orchestrates the Azure AD VC APIs/SDKs calls for VC presentation and validation, and the logic to onboard accounts.
+**Onboarding portal**: This is a web front end that orchestrates the Azure AD VC APIs/SDKs calls for VC presentation and validation, and the logic to onboard accounts.
**Custom logic / workflows**: Specific logic with organization-specific steps before and after updating the user account. This might include approval workflows, additional validations, logging, notifications, etc.
Verifiable credentials can also be used to enable faster onboarding by replacing
* **Storing VC Attributes**: Where possible do not store attributes from VCs in your app-specific store. Be especially careful with personal data. If this information is required by specific flows within your applications, consider asking for the VC to retrieve the claims on demand.
-* **VC Attribute correlation with backend systems**: When defining the attributes of the VC with the issuer, establish a mechanism to correlate information in the backend system after the user presents the VC. This typically uses a time-bound, unique identifier in the context of your RP in combination with the claims you receive. Some examples:
+* **VC Attribute correlation with back-end systems**: When defining the attributes of the VC with the issuer, establish a mechanism to correlate information in the back-end system after the user presents the VC. This typically uses a time-bound, unique identifier in the context of your RP in combination with the claims you receive. Some examples:
- * **New employee**: When the HR workflow reaches the point where identity proofing is required, the RP can generate a link with a time-bound unique identifier and send it to the candidateΓÇÖs email address on the HR system. This unique identifier should be sufficient to correlate information such as Firstname, LastName from the VC verification request to the HR record or underlying data. The attributes in the VC can be used to complete user attributes in the HR system, or to validate accuracy of user attributes about the employee.
+ * **New employee**: When the HR workflow reaches the point where identity proofing is required, the RP can generate a link with a time-bound unique identifier and send it to the candidate's email address on the HR system. This unique identifier should be sufficient to correlate information such as firstName, lastName from the VC verification request to the HR record or underlying data. The attributes in the VC can be used to complete user attributes in the HR system, or to validate accuracy of user attributes about the employee.
* **External identities** - invitation: When an existing user in your organization invites an external user to be onboarded in the target system, the RP can generate a link with a unique identifier that represents the invitation transaction and send it to the external user's email address. This unique identifier should be sufficient to correlate the VC verification request to the invitation record or underlying data and continue the provisioning workflow. The attributes in the VC can be used to validate or complete the external user attributes.
Verifiable credentials can be used as additional proof to access to sensitive ap
#### Additional elements
-**Relying party web frontend**: This is the web frontend of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
+**Relying party web front end**: This is the web front end of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
-**Other backend services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
+**Other back-end services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
#### Design Considerations
The decentralized nature of verifiable credentials enables this scenario without
#### Additional elements
-**Relying party web frontend**: This is the web frontend of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
+**Relying party web front end**: This is the web front end of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
The decentralized nature of verifiable credentials enables this scenario without
### Account recovery
-Verifiable credentials can be used as an approach to account recovery. For example, when a user needs to recover their account they might access a website that requires them to present a VC and initiate an Azure AD credential reset by calling MS Graph APIs as shown in the following diagram.
+Verifiable credentials can be used as an approach to account recovery. For example, when a user needs to recover their account, they might access a website that requires them to present a VC and initiate an Azure AD credential reset by calling MS Graph APIs as shown in the following diagram.
Note: While the scenario we describe in this section is specific to recovering Azure AD accounts, this approach can also be used to recover accounts in other systems.
Similarly, you can use a VC to generate a temporary access pass that will allow
**Authorization**: Create an authorization mechanism such as a security group that the RP checks before proceeding with the credential recovery. For example, only users in specific groups might be eligible to recover an account with a VC.
-**Interaction with Azure AD**: The service-to-service communication between the web frontend and Azure AD must be secured as a highly privileged system, because it can reset employeesΓÇÖ credentials. Grant the web frontend the least privileged roles possible. Some examples include:
+**Interaction with Azure AD**: The service-to-service communication between the web front end and Azure AD must be secured as a highly privileged system because it can reset employees' credentials. Grant the web front end the least privileged roles possible. Some examples include:
* Grant the RP website the ability to use a service principal granted the MS Graph scope UserAuthenticationMethod.ReadWrite.All to reset authentication methods. Don't grant User.ReadWrite.All, which enables the ability to create and delete users.
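As an illustration only (not taken from the article), the following Azure CLI sketch shows one way to look up and grant that Microsoft Graph application permission to an existing app registration; the app ID is a placeholder, and an administrator must still review and consent.

```azurecli
# Hedged sketch: grant the UserAuthenticationMethod.ReadWrite.All application permission to the RP app.
GRAPH_APP_ID="00000003-0000-0000-c000-000000000000"   # well-known Microsoft Graph application ID
RP_APP_ID="<your-relying-party-app-id>"               # placeholder

# Look up the app role ID for the permission on the Microsoft Graph service principal.
ROLE_ID=$(az ad sp show --id $GRAPH_APP_ID \
  --query "appRoles[?value=='UserAuthenticationMethod.ReadWrite.All'].id | [0]" -o tsv)

# Add the permission to the app registration, then grant tenant-wide admin consent.
az ad app permission add --id $RP_APP_ID --api $GRAPH_APP_ID --api-permissions "$ROLE_ID=Role"
az ad app permission admin-consent --id $RP_APP_ID
```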
As with any solution, you must plan for performance. Focus areas include latency
The following provides areas to consider when planning for performance:
-* The Azure AD Verifiable Credentials issuance service is deployed in West Europe, North Europe, West US 2, and West Central US Azure regions. To limit latency, deploy your verification frontend (website) and key vault in the region listed above that is closest to where requests are expected to originate from.
+* The Azure AD Verifiable Credentials issuance service is deployed in West Europe, North Europe, West US 2, and West Central US Azure regions. To limit latency, deploy your verification front end (website) and key vault in the region listed above that is closest to where requests are expected to originate from.
* Model based on throughput:
As you plan for operations, we recommend that you capture each attempt of c
As part of your operational planning, consider monitoring the following:
-* For scalability:
+* **For scalability**:
* Monitor failed VC validation as a part of end-to-end security metrics of applications. * Monitor end-to-end latency of credential verification.
-* For reliability and dependencies:
+* **For reliability and dependencies**:
* Monitor underlying dependencies used by the verification solution. * Follow [Azure Key Vault monitoring and alerting](../../key-vault/general/alert.md).
-* For security:
+* **For security**:
* Enable logging for Key Vault to track signing operations, as well as to monitor and alert on configuration changes. Refer to [How to enable Key Vault logging](../../key-vault/general/howto-logging.md) for more information.
Learn more about architecting VC solutions
Implement Verifiable Credentials
-[Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md)
+ * [Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md)
-[Get started with Verifiable Credentials](get-started-verifiable-credentials.md)
+ * [Get started with Verifiable Credentials](get-started-verifiable-credentials.md)
[FAQs](verifiable-credentials-faq.md)
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/developer-portal-faq.md
documentationcenter: API Management
- Previously updated : 04/15/2021+ Last updated : 07/30/2021
The call failure may also be caused by a TLS/SSL certificate, which is assigned
If your local version of the developer portal cannot save or retrieve information from the storage account or API Management instance, the SAS tokens may have expired. You can fix that by generating new tokens. For instructions, refer to the tutorial to [self-host the developer portal](developer-portal-self-host.md#step-2-configure-json-files-static-website-and-cors-settings).
+## How do I disable sign-up in the developer portal?
+
+If you don't need the sign-up functionality enabled by default in the developer portal, you can disable it with these steps:
+
+1. In the Azure portal, navigate to your API Management instance.
+1. Under **Developer portal** in the menu, select **Identities**.
+1. Delete each identity provider that appears in the list. Select each provider, select the context menu (**...**), and select **Delete**.
+
+ :::image type="content" source="media/developer-portal-faq/delete-identity-providers.png" alt-text="Delete identity providers":::
+
+1. Navigate to the developer portal administrative interface.
+1. Remove **Sign up** links and navigation items in the portal content. For information about customizing portal content, see [Tutorial: Access and customize the developer portal](api-management-howto-developer-portal-customize.md).
+
+ :::image type="content" source="media/developer-portal-faq/delete-navigation-item.png" alt-text="Delete navigation item":::
+
+1. Modify the **Sign up** page content to remove fields used to enter identity data, in case users navigate directly to it.
+
+ Optionally, delete the **Sign up** page. Currently, you use the [contentItem](/rest/api/apimanagement/2021-01-01-preview/content-item) REST APIs to list and delete this page.
+
+1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
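The contentItem step above can be scripted with `az rest`. Treat the following as a hedged sketch: it assumes the sign-up page is exposed as a content item under the `page` content type, uses placeholder subscription, resource group, and service names, and relies on the `2021-01-01-preview` API version referenced earlier.

```azurecli
# Hedged sketch: list page content items, then delete the sign-up page once you know its item ID.
APIM_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>"

az rest --method get \
  --url "https://management.azure.com${APIM_ID}/contentTypes/page/contentItems?api-version=2021-01-01-preview"

az rest --method delete \
  --headers "If-Match=*" \
  --url "https://management.azure.com${APIM_ID}/contentTypes/page/contentItems/<content-item-id>?api-version=2021-01-01-preview"
```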
+ ## How can I remove the developer portal content provisioned to my API Management service? Provide the required parameters in the `scripts.v3/cleanup.bat` script in the developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal), and run the script
You can generate *user-specific tokens* (including admin tokens) using the [Get
> [!NOTE] > The token must be URL-encoded. + ## Next steps Learn more about the new developer portal:
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-staging-slots.md
If any errors occur in the target slot (for example, the production slot) after
## Configure auto swap > [!NOTE]
-> Auto swap isn't supported in web apps on Linux.
+> Auto swap isn't supported in web apps on Linux and Web App for Containers.
Auto swap streamlines Azure DevOps scenarios where you want to deploy your app continuously with zero cold starts and zero downtime for customers of the app. When auto swap is enabled from a slot into production, every time you push your code changes to that slot, App Service automatically [swaps the app into production](#swap-operation-steps) after it's warmed up in the source slot.
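If you manage slots from the command line, auto swap can also be enabled with the Azure CLI. A minimal sketch, assuming the resource group, app, and a staging slot already exist:

```azurecli
# Hedged sketch: configure the 'staging' slot to auto swap into production after warm-up.
az webapp deployment slot auto-swap \
  --resource-group <resource-group> \
  --name <app-name> \
  --slot staging \
  --auto-swap-slot production
```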
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 06/28/2021 Last updated : 08/02/2021
For details on using managed identities, see [Enable managed identity for Azure
Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:
-* Azure Run As account: Allows you to manage Azure resources based on the Azure Resource Manager deployment and management service for Azure.
-* Azure Classic Run As account: Allows you to manage Azure classic resources based on the Classic deployment model.
+To create or renew a Run As account, permissions are needed at three levels:
+
+- Subscription,
+- Azure Active Directory (Azure AD), and
+- Automation account
+
+### Subscription permissions
+
+You need the `Microsoft.Authorization/*/Write` permission. This permission is obtained through membership of one of the following Azure built-in roles:
+
+- [Owner](../role-based-access-control/built-in-roles.md#owner)
+- [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator)
+
+To configure or renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
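For example, one way to meet the subscription-level requirement is to assign one of the built-in roles above at subscription scope. A minimal sketch, with a placeholder user and subscription ID:

```azurecli
# Hedged sketch: grant User Access Administrator (which carries Microsoft.Authorization/*/Write)
# to the user who will create or renew the Run As account.
az role assignment create \
  --assignee "runas-admin@contoso.com" \
  --role "User Access Administrator" \
  --scope "/subscriptions/<subscription-id>"
```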
+
+### Azure AD permissions
+
+To be able to create or renew the service principal, you need to be a member of one of the following Azure AD built-in roles:
+
+- [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator)
+- [Application Developer](../active-directory/roles/permissions-reference.md#application-developer)
+
+Membership can be assigned to **ALL** users in the tenant at the directory level, which is the default behavior. You can grant membership to either role at the directory level. For more information, see [Who has permission to add applications to my Azure AD instance?](../active-directory/develop/active-directory-how-applications-are-added.md#who-has-permission-to-add-applications-to-my-azure-ad-instance).
+
+### Automation account permissions
+
+To be able to create or update the Automation account, you need to be a member of one of the following Automation account roles:
+
+- [Owner](./automation-role-based-access-control.md#owner)
+- [Contributor](./automation-role-based-access-control.md#contributor)
+- [Custom Azure Automation Contributor](./automation-role-based-access-control.md#custom-azure-automation-contributor-role)
To learn more about the Azure Resource Manager and Classic deployment models, see [Resource Manager and classic deployment](../azure-resource-manager/management/deployment-models.md).
automation Manage Runas Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runas-account.md
Title: Manage an Azure Automation Run As account
description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal. Previously updated : 05/17/2021 Last updated : 08/02/2021
In this article we cover how to manage a Run as or Classic Run As account, inclu
* How to renew a certificate from an enterprise or third-party certificate authority (CA) * Manage permissions for the Run As account
-To learn more about Azure Automation account authentication and guidance related to process automation scenarios, see [Automation Account authentication overview](automation-security-overview.md).
+To learn more about Azure Automation account authentication, permissions required to manage the Run as account, and guidance related to process automation scenarios, see [Automation Account authentication overview](automation-security-overview.md).
## <a name="cert-renewal"></a>Renew a self-signed certificate
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
To achieve comprehensive business continuity on Azure, build your application ar
| [App Service Environments](../app-service/environment/zone-redundancy.md) | :large_blue_diamond: | | [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) | :large_blue_diamond: | | [Azure API Management](../api-management/zone-redundancy.md) | :large_blue_diamond: |
+| [Azure App Configuration](../azure-app-configuration/faq.yml#how-does-app-configuration-ensure-high-data-availability) | :large_blue_diamond: |
| [Azure Bastion](../bastion/bastion-overview.md) | :large_blue_diamond: | | [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | :large_blue_diamond: | | [Azure Cognitive Search](../search/search-performance-optimization.md#availability-zones) | :large_blue_diamond: |
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
Event Grid Web Hooks require validation on creation. You can validate by followi
:::image type="content" source="./media/event-subscription-view-webhook.png" alt-text="Web Hook shows up in a table on the bottom of the page." ::: > [!NOTE]
-> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](/azure/event-grid/event-filtering.md) or [Service Bus subscription filters](/azure/service-bus-messaging/topic-filters.md). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
+> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](/azure/event-grid/event-filtering) or [Service Bus subscription filters](/azure/service-bus-messaging/topic-filters). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
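As a hedged illustration (check the flag names and the event payload fields against the Event Grid reference for your CLI version), an advanced filter on the App Configuration event's key might look like this when creating the subscription with a webhook endpoint:

```azurecli
# Hedged sketch: only forward key-value modified events whose key starts with 'app1:'.
az eventgrid event-subscription create \
  --name app1-config-changes \
  --source-resource-id <app-configuration-store-resource-id> \
  --endpoint <webhook-endpoint-url> \
  --included-event-types Microsoft.AppConfiguration.KeyValueModified \
  --advanced-filter data.key StringBeginsWith app1:
```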
## Verify and test application
azure-monitor Alerts Metric Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-create-templates.md
Previously updated : 7/21/2021 Last updated : 8/02/2021 # Create a metric alert with a Resource Manager template
az deployment group create \
--parameters @multidimensionalstaticmetricalert.parameters.json ```
+> [!NOTE]
+>
+> Using "All" as a dimension value is equivalent to selecting "\*" (all current and future values).
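As a hedged aside (not part of the template above), the same selection can be expressed with the `*` wildcard when creating a rule from the Azure CLI; verify that your CLI version accepts the wildcard in the condition syntax:

```azurecli
# Hedged sketch: alert on average Transactions across all current and future values of the ApiName dimension.
az monitor metrics alert create \
  --name all-api-transactions \
  --resource-group <resource-group> \
  --scopes <storage-account-resource-id> \
  --condition "avg Transactions > 1000 where ApiName includes *" \
  --window-size 5m \
  --evaluation-frequency 1m
```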
+ ## Template for a Dynamic Thresholds metric alert that monitors multiple dimensions
azure-monitor Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric.md
description: Learn how to use Azure portal or CLI to create, view, and manage me
Previously updated : 07/19/2021 Last updated : 08/02/2021 # Create, view, and manage metric alerts using Azure Monitor
The following procedure describes how to create a metric alert rule in Azure por
- You can also **Select all current and future values** for any of the dimensions. This will dynamically scale the selection to all current and future values for a dimension. The metric alert rule will evaluate the condition for all combinations of values selected. [Learn more about how alerting on multi-dimensional metrics works](./alerts-metric-overview.md).
+
+ > [!NOTE]
+ > Using "All" as a dimension value is equivalent to selecting "All current and future values".
9. Select the **Threshold** type, **Operator**, and **Aggregation type**. This will determine the logic that the metric alert rule will evaluate. - If you are using a **Static** threshold, continue to define a **Threshold value**. The metric chart can help determine what might be a reasonable threshold.
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-overview.md
The consumption and management of alert instances requires the user to have the
You might want to query programmatically for alerts generated against your subscription. Queries might be to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
-You can query for alerts generated against your subscriptions either by using the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) or by using the [Azure Resource Graph](../../governance/resource-graph/overview.md) and the [REST API for Resources](/rest/api/azureresourcegraph/resourcegraph(2020-04-01-preview)/resources/resources).
+It is recommended that you use [Azure Resource Graph](../../governance/resource-graph/overview.md) with the `AlertsManagementResources` schema for querying fired alerts. Resource Graph is recommended when you have to manage alerts generated across multiple subscriptions.
-The Resource Graph REST API for Resources allows you to query for alert instances at scale. Resource Graph is recommended when you have to manage alerts generated across many subscriptions.
-
-The following sample request to the Resource Graph REST API returns the count of alerts within one subscription:
+The following sample request to the Resource Graph REST API returns alerts within one subscription in the last day:
```json { "subscriptions": [ <subscriptionId> ],
- "query": "AlertsManagementResources | where type =~ 'Microsoft.AlertsManagement/alerts' | summarize count()"
+ "query": "alertsmanagementresources | where properties.essentials.lastModifiedDateTime > ago(1d) | project alertInstanceId = id, parentRuleId = tolower(tostring(properties['essentials']['alertRule'])), sourceId = properties['essentials']['sourceCreatedId'], alertName = name, severity = properties.essentials.severity, status = properties.essentials.monitorCondition, state = properties.essentials.alertState, affectedResource = properties.essentials.targetResourceName, monitorService = properties.essentials.monitorService, signalType = properties.essentials.signalType, firedTime = properties['essentials']['startDateTime'], lastModifiedDate = properties.essentials.lastModifiedDateTime, lastModifiedBy = properties.essentials.lastModifiedUserName"
} ```
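If you prefer the Azure CLI to calling the REST API directly, the same data can be pulled with the Resource Graph extension. A minimal sketch (installing the extension is an assumed first step, and the query is trimmed to a count for brevity):

```azurecli
# Hedged sketch: count alerts modified in the last day by using the Azure Resource Graph CLI extension.
az extension add --name resource-graph
az graph query \
  --subscriptions <subscription-id> \
  -q "alertsmanagementresources | where properties.essentials.lastModifiedDateTime > ago(1d) | summarize count()"
```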
-You can also see the result of this Resource Graph query in the portal with Azure Resource Graph Explorer: [portal.azure.com](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AlertsManagementResources%20%7C%20where%20type%20%3D~%20%27Microsoft.AlertsManagement%2Falerts%27%20%7C%20summarize%20count())
-
-You can query the alerts for their [essential](../alerts/alerts-common-schema-definitions.md#essentials) fields.
+You can also see the result of this Resource Graph query in the portal with Azure Resource Graph Explorer: [portal.azure.com](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0A%7C%20where%20properties.essentials.lastModifiedDateTime%20%3E%20ago(1d)%0A%7C%20project%20alertInstanceId%20%3D%20id%2C%20parentRuleId%20%3D%20tolower(tostring(properties%5B'essentials'%5D%5B'alertRule'%5D))%2C%20sourceId%20%3D%20properties%5B'essentials'%5D%5B'sourceCreatedId'%5D%2C%20alertName%20%3D%20name%2C%20severity%20%3D%20properties.essentials.severity%2C%20status%20%3D%20properties.essentials.monitorCondition%2C%20state%20%3D%20properties.essentials.alertState%2C%20affectedResource%20%3D%20properties.essentials.targetResourceName%2C%20monitorService%20%3D%20properties.essentials.monitorService%2C%20signalType%20%3D%20properties.essentials.signalType%2C%20firedTime%20%3D%20properties%5B'essentials'%5D%5B'startDateTime'%5D%2C%20lastModifiedDate%20%3D%20properties.essentials.lastModifiedDateTime%2C%20lastModifiedBy%20%3D%20properties.essentials.lastModifiedUserName)
-Use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) to get more information about specific alerts, including their [alert context](../alerts/alerts-common-schema-definitions.md#alert-context) fields.
+You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) in lower scale querying scenarios or to update fired alerts.
## Smart groups
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
Azure Monitor ensures that all data and saved queries are encrypted at rest usin
Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing a higher protection level and control. Data ingested to dedicated clusters is encrypted twice: once at the service level using Microsoft-managed keys or customer-managed keys, and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
-Data ingested in the last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This data remains encrypted with Microsoft keys regardless customer-managed key configuration, but your control over SSD data adheres to [key revocation](#key-revocation). We are working to have SSD data encrypted with Customer-managed key in the first half of 2021.
+Data ingested in the last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This data remains encrypted with Microsoft keys regardless of customer-managed key configuration, but your control over SSD data adheres to [key revocation](#key-revocation). We are working to have SSD data encrypted with Customer-managed key in the second half of 2021.
Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires commitment Tier starting at 500 GB/day and can have values of 500, 1000, 2000 or 5000 GB/day.
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logicapp-flow-connector.md
For example, you can create a logic app to use Azure Monitor log data in an emai
## Connector limits The Azure Monitor Logs connector has these limits:
-* Max query response size ~16.7 MB MB (16 MiB)
+* Max query response size ~16.7 MB (16 MiB). Connector infrastructure dictates that this limit is set lower than the query API limit
* Max number of records: 500,000
-* Max query timeout 110 second.
-* Chart visualizations could be available in Logs page and missing in the connector since the connector and Logs page don't use the same charting libraries currently.
+* Max query timeout 110 seconds
+* Chart visualizations might be available in the Logs page but missing in the connector, because the connector and the Logs page currently don't use the same charting libraries
Depending on the size of your data and the query you use, the connector may hit its limits and fail. You can work around such cases by adjusting the trigger recurrence to run more frequently and query less data. You can use queries that aggregate your data to return fewer records and columns.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Considerations:
## Prerequisites The following prerequisites must be completed before configuring Log Analytics data export: -- Destinations must be created prior to the export rule configuration and should be in the same region as your Log Analytics workspace. If you need to replicate your data to other storage accounts, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md). -- The storage account must be StorageV1 or StorageV2. Classic storage is not supported
+- Destinations must be created prior to the export rule configuration and should be in the same region as your Log Analytics workspace. If you need to replicate your data to other storage accounts, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS.
+- The storage account must be StorageV1 or above. Classic storage is not supported.
- If you have configured your storage account to allow access from selected networks, you need to add an exception in your storage account settings to allow Azure Monitor to write to your storage. ## Enable data export
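Once the prerequisites are met, an export rule can be created in the portal, through the REST API, or with the Azure CLI. The following is a sketch with placeholder names; check the current CLI reference, because the data-export parameters have changed across versions:

```azurecli
# Hedged sketch: export two tables from a Log Analytics workspace to a storage account.
az monitor log-analytics workspace data-export create \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --name export-security-tables \
  --tables SecurityEvent Heartbeat \
  --destination <storage-account-resource-id> \
  --enable true
```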
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-enable.md
Virtual machine must be located in one of the following regions:
- Australia Central - Australia East - Australia Southeast
+- Brazil South
+- Brazil Southeast
- Canada Central - Central India - Central US
Virtual machine must be located in one of the following regions:
- France Central - Germany West Central - Japan East
+- Japan West
- Korea Central - North Central US - North Europe
+- Norway East
- South Central US - South Africa North - Southeast Asia - Switzerland North
+- Switzerland West
+- UAE North
- UK South - UK West - West Central US
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
na ms.devlang: na Previously updated : 06/03/2021 Last updated : 08/02/2021 # Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries
Example 4 uses the reduced per-client `sunrpc.max_tcp_slot_table_entry` value of
* The client will issue no more than 8 requests in flight to the server per connection. * The server will accept no more than 128 requests in flight from this single connection.
-When using NFSv3, *you should collectively keep the storage endpoint slot count to 2,000 or less*. It is best to set the per-connection value for `sunrpc.max_tcp_slot_table_entries` to less than 128 when an application scales out across many network connections (`nconnect` and HPC in general, and EDA in particular).
+When using NFSv3, *you should collectively keep the storage endpoint slot count to 10,000 or less*. It is best to set the per-connection value for `sunrpc.max_tcp_slot_table_entries` to less than 128 when an application scales out across many network connections (`nconnect` and HPC in general, and EDA in particular).
### How to calculate the best `sunrpc.max_tcp_slot_table_entries`
The following table shows a sample study of concurrency with arbitrary latencies
### How to calculate concurrency settings by connection count
-For example, the workload is an EDA farm, and 200 clients all drive workload to the same storage end point (a storage endpoint is a storage IP address), then you calculate the required I/O rate and divide the concurrency across the farm.
+For example, if the workload is an EDA farm and 1,250 clients all drive workload to the same storage end point (a storage end point is a storage IP address), then you calculate the required I/O rate and divide the concurrency across the farm.
Assume that the workload is 4,000 MiB/s using a 256-KiB average operation size and an average latency of 10 ms. To calculate concurrency, use the following formula:
The calculation translates to a concurrency of 160:
`(160 = 16,000 × 0.010)`
-Given the need for 200 clients, you could safely set `sunrpc.max_tcp_slot_table_entries` to 2 per client to reach the 4,000 MiB/s. However, you might decide to build in extra headroom by setting the number per client to 4 or even 8, keeping under the 2000 recommended slot ceiling.
+Given the need for 1,250 clients, you could safely set `sunrpc.max_tcp_slot_table_entries` to 2 per client to reach the 4,000 MiB/s. However, you might decide to build in extra headroom by setting the number per client to 4 or even 8, keeping well under the 10,000 recommended slot ceiling.
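To make the arithmetic above easy to reproduce, here is a small shell sketch that plugs in the example numbers; the values are illustrative, not sizing guidance:

```bash
# Hedged sketch: recompute the concurrency figures used in the example above.
awk 'BEGIN {
  throughput_mib_s = 4000      # workload throughput (MiB/s)
  op_size_mib      = 0.25      # 256-KiB average operation size
  latency_s        = 0.010     # 10 ms average latency
  clients          = 1250

  iops        = throughput_mib_s / op_size_mib   # 16,000 operations per second
  concurrency = iops * latency_s                 # 160 concurrent operations in flight
  per_client  = concurrency / clients            # ~0.13, so 2 slots per client leaves ample headroom

  printf "ops/s=%d  concurrency=%d  per-client=%.2f\n", iops, concurrency, per_client
}'
```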
### How to set `sunrpc.max_tcp_slot_table_entries` on the client
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-onnx.md
Title: Deploy and make predictions with ONNX
-description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge or Azure SQL Managed Instance, and then run native PREDICT on data using the uploaded ONNX model.
+description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge, and then run native PREDICT on data using the uploaded ONNX model.
keywords: deploy SQL Edge ms.prod: sql ms.technology: machine-learning
Last updated 05/06/2021
# Deploy and make predictions with an ONNX model and SQL machine learning
-In this quickstart, you'll learn how to train a model, convert it to ONNX, deploy it to [Azure SQL Edge](onnx-overview.md) or [Azure SQL Managed Instance](../azure-sql/managed-instance/machine-learning-services-overview.md), and then run native PREDICT on data using the uploaded ONNX model.
+In this quickstart, you'll learn how to train a model, convert it to ONNX, deploy it to [Azure SQL Edge](onnx-overview.md), and then run native PREDICT on data using the uploaded ONNX model.
This quickstart is based on **scikit-learn** and uses the [Boston Housing dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html).
MSE are equal
## Insert the ONNX model
-Store the model in Azure SQL Edge or Azure SQL Managed Instance, in a `models` table in a database `onnx`. In the connection string, specify the **server address**, **username**, and **password**.
+Store the model in Azure SQL Edge, in a `models` table in a database `onnx`. In the connection string, specify the **server address**, **username**, and **password**.
```python import pyodbc
FROM PREDICT(MODEL = @model, DATA = predict_input, RUNTIME=ONNX) WITH (variable1
## Next Steps * [Machine Learning and AI with ONNX in SQL Edge](onnx-overview.md)
-* [Machine Learning Services in Azure SQL Managed Instance](../azure-sql/managed-instance/machine-learning-services-overview.md)
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
This table provides a quick comparison for the change in terminology:
| Feature | Details | | | |
+| [16 TB support for SQL Managed Instance General Purpose](https://techcommunity.microsoft.com/t5/azure-sql/increased-storage-limit-to-16-tb-for-sql-managed-instance/ba-p/2421443) | Support for allocation up to 16 TB of space on SQL Managed Instance General Purpose |
+| [Migration with Log Replay Service](../managed-instance/log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. |
+| [Maintenance window](./maintenance-window.md)| The maintenance window feature allows you to configure maintenance schedule. |
| [Distributed transactions](./elastic-transactions-overview.md) | Distributed transactions across Managed Instances. | | [Instance pools](../managed-instance/instance-pools-overview.md) | A convenient and cost-efficient way to migrate smaller SQL instances to the cloud. | | [Instance-level Azure AD server principals (logins)](/sql/t-sql/statements/create-login-transact-sql) | Create instance-level logins using a [CREATE LOGIN FROM EXTERNAL PROVIDER](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) statement. |
This table provides a quick comparison for the change in terminology:
## New features
+### SQL Managed Instance H1 2021 updates
+
+- [Public Preview for Support 16 TB for SQL Managed Instance General Purpose](https://techcommunity.microsoft.com/t5/azure-sql/increased-storage-limit-to-16-tb-for-sql-managed-instance/ba-p/2421443) - support for allocation of up to 16 TB of space for SQL Managed Instance General Purpose (Public Preview)
+
+- [Migrate to Managed Instance with Log Replay Service](../managed-instance/log-replay-service-migrate.md) - allows migrating databases from SQL Server to SQL Managed Instance by using Log Replay Service (Public Preview)
+
+- [Maintenance window](./maintenance-window.md) - the maintenance window feature allows you to configure maintenance schedule, see [Maintenance window announcement](https://techcommunity.microsoft.com/t5/azure-sql/maintenance-window-for-azure-sql-database-and-managed-instance/ba-p/2174835) (Public Preview).
+ ### SQL Managed Instance H2 2019 updates - [Service-aided subnet configuration](https://azure.microsoft.com/updates/service-aided-subnet-configuration-for-managed-instance-in-azure-sql-database-available/) is a secure and convenient way to manage subnet configuration where you control data traffic while SQL Managed Instance ensures the uninterrupted flow of management traffic.
For updates and improvements to all Azure services, see [Service updates](https:
## Contribute to content
-To contribute to the Azure SQL documentation, see the [Docs contributor guide](/contribute/).
+To contribute to the Azure SQL documentation, see the [Docs contributor guide](/contribute/).
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
### Gen5 compute generation (part 1)
-|Compute size (service objective)|HS_Gen5_2|HS_Gen5_4|HS_Gen5_6|HS_Gen_8|HS_Gen5_10|HS_Gen5_12|HS_Gen5_14|
+|Compute size (service objective)|HS_Gen5_2|HS_Gen5_4|HS_Gen5_6|HS_Gen5_8|HS_Gen5_10|HS_Gen5_12|HS_Gen5_14|
|: | --: |--: |--: |--: |: | --: |--: |--: | |Compute generation|Gen5|Gen5|Gen5|Gen5|Gen5|Gen5|Gen5| |vCores|2|4|6|8|10|12|14|
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: Access to Azure SQL Database
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Microsoft Access database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for Access (SSMA for Access).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Microsoft Access database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for Access (SSMA for Access).
For other migration guides, see [Azure Database Migration Guide](/data-migration).
For more assistance with completing this migration scenario, see the following r
| Title | Description | | | |
-| [Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | Provides suggested ΓÇ£best fitΓÇ¥ target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
+| [Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130) | Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Last updated 05/14/2021
# Migration guide: IBM Db2 to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your IBM Db2 databases to Azure SQL Database, by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for Db2.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your IBM Db2 databases to Azure SQL Database, by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for Db2.
For other migration guides, see [Azure Database Migration Guides](/data-migration).
For additional assistance, see the following resources, which were developed in
|Asset |Description | |||
-|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing and automated and uniform target platform decision process.|
+|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://www.microsoft.com/download/details.aspx?id=103108)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including \*.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.| |[IBM Db2 LUW inventory scripts and artifacts](https://www.microsoft.com/download/details.aspx?id=103109)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.| |[IBM Db2 to SQL DB - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.|
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: MySQL to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your MySQL database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for MySQL (SSMA for MySQL).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your MySQL database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for MySQL (SSMA for MySQL).
For other migration guides, see [Azure Database Migration Guide](/data-migration).
For more assistance with completing this migration scenario, see the following r
| Title | Description | | | |
-| [Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | Provides suggested ΓÇ£best fitΓÇ¥ target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
+| [Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130) | Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
|[MySQL to SQL DB - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.| The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
Last updated 08/25/2020
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for Oracle (SSMA for Oracle).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for Oracle (SSMA for Oracle).
For other migration guides, see [Azure Database Migration Guides](/data-migration).
For more assistance with completing this migration scenario, see the following r
| **Title/link** | **Description** | | - | -- |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
-| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
-| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
-| [SSMA for Oracle Common Errors and How to Fix Them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a nonscalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SSMA for Oracle doesn't convert queries with a nonscalar condition in the WHERE clause. Instead, it generates the error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
-| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server Database. If the migration requires changes to features or functionality, the possible impact of each change on the applications that use the database must be considered carefully. |
-|[Oracle to SQL DB - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality.<br /><br />If you're looking for an alternative data validation option, you can use the Database Compare utility to compare data down to the row or column level in all or selected tables, rows, and columns.|
+| [Data Workload Assessment Model and Tool](https://www.microsoft.com/download/details.aspx?id=103130) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://www.microsoft.com/download/details.aspx?id=103121) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://www.microsoft.com/download/details.aspx?id=103120) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
+| [Oracle to SQL DB - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality.<br /><br />If you're looking for an alternative data validation option, you can use the Database Compare utility to compare data down to the row or column level in all or selected tables, rows, and columns.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Sap Ase To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sap-ase-to-sql-database.md
Last updated 03/19/2021
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your SAP Adaptive Server Enterprise (ASE) databases to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for SAP Adaptive Server Enterprise.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your SAP Adaptive Server Enterprise (ASE) databases to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for SAP Adaptive Server Enterprise.
For other migration guides, see [Azure Database Migration Guide](/data-migration).
For other migration guides, see [Azure Database Migration Guide](/data-migration
Before you begin migrating your SAP ASE database to your SQL database, do the following: - Verify that your source environment is supported. -- Download and install [SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256).
+- Download and install [SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/download/details.aspx?id=54256).
- Ensure that you have connectivity and sufficient permissions to access both source and target. ## Pre-migration
After you've met the prerequisites, you're ready to discover the topology of you
### Assess
-By using [SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256), you can review database objects and data, assess databases for migration, migrate Sybase database objects to your SQL database, and then migrate data to the SQL database. To learn more, see [SQL Server Migration Assistant for Sybase (SybaseToSQL)](/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql).
+By using [SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/download/details.aspx?id=54256), you can review database objects and data, assess databases for migration, migrate Sybase database objects to your SQL database, and then migrate data to the SQL database. To learn more, see [SQL Server Migration Assistant for Sybase (SybaseToSQL)](/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql).
To create an assessment, do the following:
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
For more migration information, see the [migration overview](sql-server-to-sql-d
## Prerequisites
-For your [SQL Server migration](https://azure.microsoft.com/en-us/migration/sql-server/) to Azure SQL Database, make sure you have:
+For your [SQL Server migration](https://azure.microsoft.com/migration/sql-server/) to Azure SQL Database, make sure you have:
- Chosen a [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and corresponding tools.
- Installed [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server.
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
For more assistance, see the following resources that were developed for real-wo
|Asset |Description |
|||
|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.|
-|[DBLoader utility](https://github.com/microsoft/DataMigrationTeam/tree/master/DBLoader%20Utility)|You can use DBLoader to load data from delimited text files into SQL Server. This Windows console utility uses the SQL Server native client bulk-load interface. The interface works on all versions of SQL Server, along with Azure SQL Database.|
|[Bulk database creation with PowerShell](https://www.microsoft.com/download/details.aspx?id=103107)|You can use a set of three PowerShell scripts that create a resource group (create_rg.ps1), the [logical server in Azure](../../database/logical-servers.md) (create_sqlserver.ps1), and a SQL database (create_sqldb.ps1). The scripts include loop capabilities so you can iterate and create as many servers and databases as necessary.| |[Bulk schema deployment with MSSQL-Scripter and PowerShell](https://www.microsoft.com/download/details.aspx?id=103032)|This asset creates a resource group, creates one or multiple [logical servers in Azure](../../database/logical-servers.md) to host Azure SQL Database, exports every schema from an on-premises SQL Server instance (or multiple SQL Server 2005+ instances), and imports the schemas to Azure SQL Database.| |[Convert SQL Server Agent jobs into elastic database jobs](https://www.microsoft.com/download/details.aspx?id=103123)|This script migrates your source SQL Server Agent jobs to elastic database jobs.| |[Send emails from Azure SQL Database](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/AF%20SendMail)|This solution is an alternative to SendMail capability and is available for on-premises SQL Server. It uses Azure Functions and the SendGrid service to send emails from Azure SQL Database.| |[Utility to move on-premises SQL Server logins to Azure SQL Database](https://www.microsoft.com/download/details.aspx?id=103111)|A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Database. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.| |[Perfmon data collection automation by using Logman](https://www.microsoft.com/download/details.aspx?id=103114)|You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.|
-|[Database migration to Azure SQL Database by using BACPAC](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Database%20migrations%20-%20Benchmarks%20and%20Steps%20to%20Import%20to%20Azure%20SQL%20DB%20Single%20Database%20from%20BACPAC.pdf)|This white paper provides guidance and steps to help accelerate migrations from SQL Server to Azure SQL Database by using BACPAC files.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|||
-|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
+|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://www.microsoft.com/download/details.aspx?id=103108)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including \*.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.| |[IBM Db2 LUW inventory scripts and artifacts](https://www.microsoft.com/download/details.aspx?id=103109)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.| |[IBM Db2 to SQL MI - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.|
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
For other migration guides, see [Azure Database Migration Guides](/data-migratio
Before you begin migrating your Oracle schema to SQL Managed Instance: - Verify your source environment is supported.-- Download [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- Download [SSMA for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
- Have a [SQL Managed Instance](../../managed-instance/instance-create-quickstart.md) target. - Obtain the [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
By using SSMA for Oracle, you can review database objects and data, assess datab
To create an assessment:
-1. Open [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Open [SSMA for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
1. Select **File**, and then select **New Project**. 1. Enter a project name and a location to save your project. Then select **Azure SQL Managed Instance** as the migration target from the drop-down list and select **OK**.
For more assistance with completing this migration scenario, see the following r
| **Title/link** | **Description** |
| - | -- |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
-| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
-| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
-| [SSMA for Oracle Common Errors and How to Fix Them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a nonscalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SSMA for Oracle doesn't convert queries with a nonscalar condition in the WHERE clause. Instead, it generates the error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
-| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server Database. If the migration requires changes to features or functionality, the possible impact of each change on the applications that use the database must be considered carefully. |
+| [Data Workload Assessment Model and Tool](https://www.microsoft.com/download/details.aspx?id=103130) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://www.microsoft.com/download/details.aspx?id=103121) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://www.microsoft.com/download/details.aspx?id=103120) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
|[Oracle to SQL MI - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality.<br /><br />If you're looking for an alternative data validation option, you can use the Database Compare utility to compare data down to the row or column level in all or selected tables, rows, and columns.| The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
The Data SQL Engineering team developed these resources. This team's core charte
- To learn more about SQL Managed Instance, see: - [An overview of Azure SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md)
- - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/)
+ - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
- To learn more about the framework and adoption cycle for cloud migrations, see: - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
For more assistance, see the following resources that were developed for real-wo
|Asset |Description |
|||
|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.|
-|[DBLoader utility](https://github.com/microsoft/DataMigrationTeam/tree/master/DBLoader%20Utility)|You can use DBLoader to load data from delimited text files into SQL Server. This Windows console utility uses the SQL Server native client bulk-load interface. The interface works on all versions of SQL Server, along with Azure SQL Managed Instance.|
|[Utility to move on-premises SQL Server logins to Azure SQL Managed Instance](https://www.microsoft.com/download/details.aspx?id=103111)|A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Managed Instance. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.| |[Perfmon data collection automation by using Logman](https://www.microsoft.com/download/details.aspx?id=103114)|You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.|
-|[Database migration to Azure SQL Managed Instance by restoring full and differential backups](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Database%20migrations%20to%20Azure%20SQL%20DB%20Managed%20Instance%20-%20%20Restore%20with%20Full%20and%20Differential%20backups.pdf)|This white paper provides guidance and steps to help accelerate migrations from SQL Server to Azure SQL Managed Instance if you have only full and differential backups (and no log backup capability).|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|||
-|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
+|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://www.microsoft.com/download/details.aspx?id=103108)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including \*.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.| |[IBM Db2 LUW inventory scripts and artifacts](https://www.microsoft.com/download/details.aspx?id=103109)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.| |[IBM Db2 to SQL Server - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.|
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
For other migration guides, see [Database Migration](/data-migration).
To migrate your Oracle schema to SQL Server on Azure Virtual Machines, you need: - A supported source environment.-- [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
- A target [SQL Server VM](../../virtual-machines/windows/sql-vm-create-portal-quickstart.md). - The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and the [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql). - Connectivity and sufficient permissions to access the source and the target.
To use MAP Toolkit to do an inventory scan, follow these steps:
### Assess
-After you identify the data sources, use [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258) to assess the Oracle instances migrating to the SQL Server VM. The assistant will help you understand the gaps between the source and destination databases. You can review database objects and data, assess databases for migration, migrate database objects to SQL Server, and then migrate data to SQL Server.
+After you identify the data sources, use [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/download/details.aspx?id=54258) to assess the Oracle instances migrating to the SQL Server VM. The assistant will help you understand the gaps between the source and destination databases. You can review database objects and data, assess databases for migration, migrate database objects to SQL Server, and then migrate data to SQL Server.
To create an assessment, follow these steps:
-1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
1. On the **File** menu, select **New Project**. 1. Provide a project name and a location for your project, and then select a SQL Server migration target from the list. Select **OK**:
For more help with completing this migration scenario, see the following resourc
| **Title/Link** | **Description** |
| - | -- |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested best-fit target platforms, cloud readiness, and application/database remediation levels for a given workload. It offers simple one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target-platform decision process. |
-| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that targets Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
-| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that you need to run an SSMA assessment in console mode. You provide the source.csv file by taking an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
+| [Data Workload Assessment Model and Tool](https://www.microsoft.com/download/details.aspx?id=103130) | This tool provides suggested best-fit target platforms, cloud readiness, and application/database remediation levels for a given workload. It offers simple one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target-platform decision process. |
+| [Oracle Inventory Script Artifacts](https://www.microsoft.com/download/details.aspx?id=103121) | This asset includes a PL/SQL query that targets Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://www.microsoft.com/download/details.aspx?id=103120) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that you need to run an SSMA assessment in console mode. You provide the source.csv file by taking an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
| [SSMA issues and possible remedies when migrating Oracle databases](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in a WHERE clause. SQL Server doesn't support this type of condition. So SSMA for Oracle doesn't convert queries that have a non-scalar condition in the WHERE clause. Instead, it generates an error: O2SS0001. This white paper provides details on the problem and ways to resolve it. | | [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, you need to carefully consider the possible effect of each change on the applications that use the database. | |[Oracle to SQL Server - Database Compare utility](https://www.microsoft.com/download/details.aspx?id=103016)|SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality.<br /><br />If you're looking for an alternative data validation option, you can use the Database Compare utility to compare data down to the row or column level in all or selected tables, rows, and columns.|
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
For additional assistance, see the following resources that were developed for r
|||
|[Data workload assessment model and tool](https://www.microsoft.com/download/details.aspx?id=103130)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Perfmon data collection automation using Logman](https://www.microsoft.com/download/details.aspx?id=103114)|A tool that collects Perfmon data to understand baseline performance and assists in the migration target recommendation. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server.|
-|[SQL Server Deployment in Azure](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/SQL%20Server%20Deployment%20in%20Azure%20.pdf)|This guidance whitepaper assists in reviewing various options to move your SQL Server workloads to Azure including feature comparison, high availability and backup / storage considerations. |
-|[On-Premise SQL Server to Azure virtual machine](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/OnPremise%20SQL%20Server%20to%20Azure%20VM.pdf)|This whitepaper outlines the steps to backup and restore databases from on-premises SQL Server to SQL Server on Azure virtual machine using sample scripts.|
|[Multiple-SQL-VM-VNet-ILB](https://www.microsoft.com/download/details.aspx?id=103104)|This whitepaper outlines the steps to set up multiple Azure virtual machines in a SQL Server Always On Availability Group configuration.|
|[Azure virtual machines supporting Ultra SSD per Region](https://www.microsoft.com/download/details.aspx?id=103105)|These PowerShell scripts provide a programmatic option to retrieve the list of regions that support Azure virtual machines supporting Ultra SSDs.|
azure-video-analyzer Player Widget https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/player-widget.md
In this section, we will create a JWT token that we will use later in the articl
> [!NOTE] > If you are familiar with how to generate a JWT token based on either an RSA or ECC certificate, you can skip this section.
-1. Download the [JWTTokenIssuer application](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp/tree/main/src/jwt-token-issuer/).
+1. Clone the [AVA C# samples repository](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp). Then, go to the JWTTokenIssuer application folder *src/jwt-token-issuer* and find the JWTTokenIssuer application.
> [!NOTE] > For more information about configuring your audience values, see [Access policies](./access-policies.md).
In this section, we will create a JWT token that we will use later in the articl
:::image type="content" source="media/player-widget/client-api-url.png" alt-text="Screenshot that shows the player widget endpoint."::: 5. On line 78, change the issuer to the issuer value of your certificate. Example: `https://contoso.com`
-6. Save the file.
-7. Select `F5` to run the JWTTokenIssuer application.
+6. Save the file.
> [!NOTE] > You might be prompted with the message `Required assets to build and debug are missing from 'jwt token issuer'. Add them?` Select `Yes`. :::image type="content" source="media/player-widget/visual-studio-code-required-assets.png" alt-text="Screenshot that shows the required asset prompt in Visual Studio Code.":::-
+
+7. Open a Command Prompt window and go to the folder with the JWTTokenIssuer files. Run the following two commands: `dotnet build`, followed by `dotnet run`. If you have the C# extension on Visual Studio Code, you also can select F5 to run the JWTTokenIssuer application.
The application builds and then executes. After it builds, it creates a self-signed certificate and generates the JWT token information from that certificate. You also can run the JWTTokenIssuer.exe file that's located in the debug folder of the directory where the JWTTokenIssuer was built. The advantage of running the application is that you can specify input options as follows:
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Previously updated : 07/01/2021 Last updated : 08/01/2021
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
* Bug fixes * Deprecated functionality
+## July 2021
+
+### Automatic Scaling of Media Reserved Units
+
+Starting on August 1st, 2021, Azure Video Analyzer for Media (formerly Video Indexer) enables [Media Reserved Units (MRUs)](../../media-services/latest/concept-media-reserved-units.md) auto scaling by [Azure Media Services](../../media-services/latest/media-services-overview.md), so you no longer need to manage MRUs through Azure Video Analyzer for Media. Because MRUs are scaled automatically based on your business needs, this allows for price optimization, and in many cases a price reduction.
+ ## June 2021 ### Video Analyzer for Media deployed in six new regions
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-restore-files-from-vm.md
Also, ensure that you have the [right machine to execute the ILR script](#step-2
If you run the script on a computer with restricted access, ensure there's access to: -- `download.microsoft.com` or `AzureFrontDoor.FirstParty` service tag in NSG-- Recovery Service URLs (GEO-NAME refers to the region where the Recovery Services vault resides)
+- `download.microsoft.com` or `AzureFrontDoor.FirstParty` service tag in NSG on port 443 (outbound)
+- Recovery Service URLs (GEO-NAME refers to the region where the Recovery Services vault resides) on port 3260 (outbound)
- `https://pod01-rec2.GEO-NAME.backup.windowsazure.com` (For Azure public regions) or `AzureBackup` service tag in NSG - `https://pod01-rec2.GEO-NAME.backup.windowsazure.cn` (For Azure China 21Vianet) or `AzureBackup` service tag in NSG - `https://pod01-rec2.GEO-NAME.backup.windowsazure.us` (For Azure US Government) or `AzureBackup` service tag in NSG - `https://pod01-rec2.GEO-NAME.backup.windowsazure.de` (For Azure Germany) or `AzureBackup` service tag in NSG-- Outbound ports 53 (DNS), 443, 3260
+- Public DNS resolution on port 53 (outbound)
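
To quickly check these outbound requirements from the machine that will run the script, you can probe each endpoint and port before starting a restore. The following is only a minimal sketch and not part of the ILR tooling; the `GEO-NAME` value is a placeholder for your vault's region, and PowerShell's `Test-NetConnection` is an equivalent alternative.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class OutboundAccessCheck
{
    static async Task Main()
    {
        // Replace GEO-NAME with the region of your Recovery Services vault.
        var targets = new[]
        {
            (Host: "download.microsoft.com", Port: 443),
            (Host: "pod01-rec2.GEO-NAME.backup.windowsazure.com", Port: 3260)
        };

        foreach (var target in targets)
        {
            try
            {
                // Name resolution typically exercises outbound DNS (port 53) through the OS resolver.
                IPAddress[] addresses = await Dns.GetHostAddressesAsync(target.Host);

                // A successful connect confirms the outbound TCP port is open.
                using var tcp = new TcpClient();
                await tcp.ConnectAsync(addresses[0], target.Port);
                Console.WriteLine($"{target.Host}:{target.Port} is reachable.");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"{target.Host}:{target.Port} is blocked: {ex.Message}");
            }
        }
    }
}
```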
> [!NOTE] >
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/available-sizes.md
To change the size of an existing role, change the virtual machine size in the s
To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters: -
-`ResourceType = virtualMachines ` <br>
-`VMDeploymentTypes = PaaS `
-
+```powershell
+ # Update the location
+ $location = 'WestUS2'
+ # Get all Compute Resource Skus
+ $allSkus = Get-AzComputeResourceSku
+ # Filter virtualMachine skus for given location
+ $vmSkus = $allSkus.Where{$_.resourceType -eq 'virtualMachines' -and $_.LocationInfo.Location -like $location}
+ # From filtered virtualMachine skus, select PaaS Skus
+ $paasVMSkus = $vmSkus.Where{$_.Capabilities.Where{$_.name -eq 'VMDeploymentTypes'}.Value.Contains("PaaS")}
+ # Optional step to format and sort the output by Family
+ $paasVMSkus | Sort-Object Family, Name | Format-Table -Property Family, Name, Size
+```
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/whats-new.md
We've also added links to some user-generated content. Those items will be marke
### July 2021 * Multivariate anomaly detection APIs deployed in four more regions: Australia East, Canada Central, North Europe, and Southeast Asia. Now in total 10 regions are supported.
+* Anomaly Detector (univariate) available in West US 3 and Norway East.
+ ### June 2021
We've also added links to some user-generated content. Those items will be marke
* [IoT Edge module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-cognitive-service.edge-anomaly-detector) (univariate) published. * Anomaly Detector (univariate) available in Azure China (China East 2).
-* Multivariate anomaly detection APIs preview in selected regions (West US2, West Europe).
+* Multivariate anomaly detection APIs preview in selected regions (West US 2, West Europe).
### September 2020
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Previously updated : 03/29/2021 Last updated : 07/30/2021 zone_pivot_groups: programming-languages-computer-vision
cognitive-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library.md
Previously updated : 03/29/2021 Last updated : 07/30/2021 zone_pivot_groups: programming-languages-computer-vision
cognitive-services Howtodetectfacesinimage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md
To learn more about each of the attributes, see the [Face detection and attribut
## Next steps
-In this guide, you learned how to use the various functionalities of face detection. Next, integrate these features into an app to add face data from users.
+In this guide, you learned how to use the various functionalities of face detection and analysis. Next, integrate these features into an app to add face data from users.
- [Tutorial: Add users to a Face service](../enrollment-overview.md)
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
Title: How to mitigate latency when using the Face service
description: Learn how to mitigate latency when using the Face service. -++ Last updated 1/5/2021+ # How to: mitigate latency when using the Face service
Mitigations:
var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
```
- Consider uploading a smaller file.
- - See the guidelines regarding [input data for face detection](../concepts/face-detection.md#input-data) and [input data for face recognition](../concepts/face-recognition.md#input-data).
- - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When using detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
- - For face recognition, reducing the face size to 200x200 pixels does not affect the accuracy of the recognition model.
- - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
- - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
+ - See the guidelines regarding [input data for face detection](../concepts/face-detection.md#input-data) and [input data for face recognition](../concepts/face-recognition.md#input-data).
+ - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When using detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
+ - For face recognition, reducing the face size to 200x200 pixels does not affect the accuracy of the recognition model.
+ - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
+ - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
```csharp
var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg");
```
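
A possible continuation of the snippet above, awaiting both in-flight detection calls together; the variable names follow the snippet, and `client` is assumed to be an authenticated `FaceClient` as in the earlier examples.

```csharp
// Both requests are already in flight; awaiting them together avoids serializing the latency.
var results = await Task.WhenAll(faces_1, faces_2);
var facesInFirstImage = results[0];   // IList<DetectedFace>
var facesInSecondImage = results[1];  // IList<DetectedFace>
```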
cognitive-services How To Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-use-large-scale.md
# Example: Use the large-scale feature
-This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [Face recognition](../concepts/face-recognition.md) conceptual guide.
+This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concepts/face-recognition.md) conceptual guide.
LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
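
As a rough sketch of working with the large-scale objects described here, the following uses the Face client library for .NET (Microsoft.Azure.CognitiveServices.Vision.Face); the endpoint, key, group ID, and image URL are placeholders, not values from this guide.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class LargePersonGroupSketch
{
    static async Task Main()
    {
        // Placeholder endpoint and key.
        var client = new FaceClient(new ApiKeyServiceClientCredentials("<your-face-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com"
        };

        const string groupId = "my-large-person-group";

        // Create a LargePersonGroup (can hold up to 1 million persons).
        await client.LargePersonGroup.CreateAsync(groupId, name: "My large group");

        // Add a person and register a face from an image URL (placeholder URL).
        Person person = await client.LargePersonGroupPerson.CreateAsync(groupId, name: "Anna");
        await client.LargePersonGroupPerson.AddFaceFromUrlAsync(
            groupId, person.PersonId, "https://example.com/anna-face-1.jpg");

        // Training must finish before Identify can be used against the group;
        // for large-scale groups it can take a while, so poll the status.
        await client.LargePersonGroup.TrainAsync(groupId);
        TrainingStatus status;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(1));
            status = await client.LargePersonGroup.GetTrainingStatusAsync(groupId);
        } while (status.Status == TrainingStatusType.Running ||
                 status.Status == TrainingStatusType.Nonstarted);

        Console.WriteLine($"Training finished with status: {status.Status}");
    }
}
```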
In this guide, you learned how to migrate the existing PersonGroup or FaceList c
Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup. - [Add faces](how-to-add-faces.md)-- [Face client library quickstart](../Quickstarts/client-libraries.md)
+- [Face client library quickstart](../Quickstarts/client-libraries.md)
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-detection-model.md
The best way to compare the performances of the detection models is to use them
## Next steps
-In this article, you learned how to specify the detection model to use with different Face APIs. Next, follow a quickstart to get started using face detection.
+In this article, you learned how to specify the detection model to use with different Face APIs. Next, follow a quickstart to get started with face detection and analysis.
* [Face .NET SDK](../quickstarts/client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) * [Face Python SDK](../quickstarts/client-libraries.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-recognition-model.md
If you normally specify a confidence threshold (a value between zero and one tha
## Next steps
-In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started using face detection.
+In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started with face detection.
* [Face .NET SDK](../quickstarts/client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) * [Face Python SDK](../quickstarts/client-libraries.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
In this article, you learned how to specify the recognition model to use with di
[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d [FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b [FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
-[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
keywords: facial recognition, facial recognition software, facial analysis, face
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
-The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as security, natural user interface, image content analysis and management, mobile apps, and robotics.
+The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
-The Face service provides several different facial analysis functions which are each outlined in the following sections.
+**Identity verification** checks that a new (remote) user is who they claim to be by matching their face against the photo on their identity document. It is commonly used in the gig economy, banking and online education industries.
+
+**Face analysis** locates human faces in an image and returns different kinds of face-related data, such as whether the person is wearing a mask or glasses, or has facial hair.
This documentation contains the following types of articles: * The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This documentation contains the following types of articles:
* The [conceptual articles](./concepts/face-detection.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
-## Face detection
+## Face detection and analysis
-The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes, such as head pose, gender, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications.
+Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data, which is used in later operations to identify or verify faces.
-> [!NOTE]
-> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to do further Face operations like Identify, Verify, Find Similar, or Group, you should use this Face service instead.
+Optionally, face detection can also extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service (for example, your application could advise a user who is wearing sunglasses to take them off).
-For more information on face detection, see the [Face detection](concepts/face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
+> [!NOTE]
+> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to use other Face operations like Identify, Verify, Find Similar, or Face grouping, you should use this service instead.
-## Face verification
+For more information on face detection and analysis, see the [Face detection](concepts/face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
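
As a minimal sketch of the detection call and optional attributes described above, the following uses the Face client library for .NET (Microsoft.Azure.CognitiveServices.Vision.Face); the endpoint, key, and image URL are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class DetectSketch
{
    static async Task Main()
    {
        var client = new FaceClient(new ApiKeyServiceClientCredentials("<your-face-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com"
        };

        // Detect faces and request a few optional attributes.
        IList<DetectedFace> faces = await client.Face.DetectWithUrlAsync(
            "https://example.com/photo.jpg",
            returnFaceId: true,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.HeadPose,
                FaceAttributeType.Glasses,
                FaceAttributeType.Emotion
            });

        foreach (DetectedFace face in faces)
        {
            // FaceId is the ID that later operations such as Identify or Verify consume.
            Console.WriteLine($"Face {face.FaceId} at {face.FaceRectangle.Left},{face.FaceRectangle.Top}; " +
                              $"glasses: {face.FaceAttributes.Glasses}");
        }
    }
}
```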
-The Verify API builds on Detection and addresses the question, "Are these two images the same person?". Verification is also called "one-to-one" matching because the probe image is compared to only one enrolled template. Verification can be used in identity verification or access control scenarios to verify a picture matches a previously captured image (such as from a photo from a government issued ID card). For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) reference documentation.
-## Face identification
+## Identity verification
-The Identify API also starts with Detection and answers the question, "Can this detected face be matched to any enrolled face in a database?" Because it's like face recognition search, is also called "one-to-many" matching. Candidate matches are returned based on how closely the probe template with the detected face matches each of the enrolled templates.
+Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be. Face identification can be thought of as "one-to-many" matching. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building access to a certain group of people or verifying the user of a device.
The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered. ![A grid with three columns for different people, each with three rows of face images](./Images/person.group.clare.jpg)
-After you create and train a database, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+After you create and train a group, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+
+### Verification
+
+The verification operation answers the question, "Do these two faces belong to the same person?". Verification is also called "one-to-one" matching because the probe face data is compared to only a single enrolled face. Verification is used in the identification scenario to double-check that a given match is accurate.
+
+For more information about identity verification, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
-For more information about person identification, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Identify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) reference documentation.
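
A sketch of the identification and verification calls described in this section, assuming the Face client library for .NET, an authenticated `FaceClient` named `client`, a trained large person group, and a `faceId` returned by a previous Detect call; the group ID is a placeholder.

```csharp
// Identify: "one-to-many" - find candidate persons in a trained group for a detected face.
IList<IdentifyResult> identifyResults = await client.Face.IdentifyAsync(
    new List<Guid> { faceId },
    largePersonGroupId: "my-large-person-group");

foreach (IdentifyCandidate candidate in identifyResults[0].Candidates)
{
    // Verify: "one-to-one" - double-check a single candidate match.
    VerifyResult verification = await client.Face.VerifyFaceToPersonAsync(
        faceId, candidate.PersonId, largePersonGroupId: "my-large-person-group");

    Console.WriteLine(
        $"Candidate {candidate.PersonId}: identify confidence {candidate.Confidence:0.00}, " +
        $"verified: {verification.IsIdentical} ({verification.Confidence:0.00})");
}
```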
## Find similar faces
-The Find Similar API does face matching between target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This operation is useful for doing a face search by image.
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
-Two working modes, **matchPerson** and **matchFace**, are supported. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
+The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
The following example shows the target face:
And these images are the candidate faces:
To find four similar faces, the **matchPerson** mode returns a and b, which show the same person as the target face. The **matchFace** mode returns a, b, c, and d&mdash;exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Find Similar API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) reference documentation.
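As a rough sketch of how the mode switch looks in a request, the snippet below runs Find Similar in **matchPerson** mode over four candidate face IDs; changing `mode` to `matchFace` disables the same-person filter. It assumes the same `HttpClient` setup (`http`, `Endpoint`) as the earlier identification sketch, and the face IDs are placeholders.

```csharp
// Compare a target face against four candidate face IDs and return up to four matches.
var findSimilarBody = new StringContent(
    "{\"faceId\":\"<target-face-id>\"," +
    "\"faceIds\":[\"<face-a>\",\"<face-b>\",\"<face-c>\",\"<face-d>\"]," +
    "\"maxNumOfCandidatesReturned\":4," +
    "\"mode\":\"matchPerson\"}",   // use "matchFace" to skip the same-person filter
    Encoding.UTF8, "application/json");

var response = await http.PostAsync($"{Endpoint}/face/v1.0/findsimilars", findSimilarBody);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```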
-## Face grouping
+## Group faces
-The Group API divides a set of unknown faces into several groups based on similarity. Each group is a disjoint proper subset of the original set of faces. All of the faces in a group are likely to belong to the same person. There can be several different groups for a single person. The groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
+The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
+All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
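A hedged sketch of a Group request is shown below; it reuses the `HttpClient` setup (`http`, `Endpoint`) from the earlier examples, and the face IDs are placeholders. The response separates the input IDs into `groups` arrays plus the `messyGroup` array of unmatched faces.

```csharp
// Group a set of unknown face IDs by similarity.
var groupBody = new StringContent(
    "{\"faceIds\":[\"<face-1>\",\"<face-2>\",\"<face-3>\",\"<face-4>\",\"<face-5>\"]}",
    Encoding.UTF8, "application/json");

var response = await http.PostAsync($"{Endpoint}/face/v1.0/group", groupBody);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```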
-## Sample apps
+## Sample app
The following sample applications show a few ways to use the Face service: -- [Face API: Windows Client Library and sample](https://github.com/Microsoft/Cognitive-Face-Windows) is a WPF app that demonstrates several scenarios of Face detection, analysis, and identification. - [FamilyNotes UWP app](https://github.com/Microsoft/Windows-appsample-familynotes) is a Universal Windows Platform (UWP) app that uses face identification along with speech, Cortana, ink, and camera in a family note-sharing scenario. ## Data privacy and security
cognitive-services Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/concepts/face-recognition.md
Title: "Face recognition concepts"
-description: This article explains the concepts of the Verify, Find Similar, Group, and Identify face recognition operations and the underlying data structures.
+description: This article explains the concept of Face recognition, its related operations, and the underlying data structures.
# Face recognition concepts
-This article explains the concepts of the Verify, Find Similar, Group, and Identify face recognition operations and the underlying data structures. Broadly, recognition describes the work of comparing two different faces to determine if they're similar or belong to the same person.
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, Face recognition refers to the method of verifying or identifying an individual using their face. Verification is one-to-one matching that takes two faces and returns whether they are the same face, and identification is one-to-many matching that takes a single face as input and returns a set of matching candidates. Face recognition is important in implementing the identity verification scenario, which enterprises and apps use to verify that a (remote) user is who they claim to be.
-## Recognition-related data structures
+## Related data structures
-The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription. Name fields can be duplicated.
+The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription. Name fields may be duplicated.
|Name|Description| |:--|:--|
The recognition operations use mainly the following data structures. These objec
## Recognition operations
-This section details how the four recognition operations use the data structures previously described. For a broad description of each recognition operation, see [Overview](../Overview.md).
+This section details how the underlying operations use the data structures previously described to identify and verify a face.
-### Verify
+### PersonGroup creation and training
-The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a face ID from DetectedFace or PersistedFace and either another face ID or a Person object and determines whether they belong to the same person. If you pass in a Person object, you can optionally pass in a PersonGroup to which that Person belongs to improve performance.
+You need to create a [PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) to store the set of people to match against. PersonGroups hold [Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) objects, which each represent an individual person and hold a set of face data belonging to that person.
-### Find Similar
+The [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) operation prepares the data set to be used in face data comparisons.
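The sketch below strings these steps together against the REST endpoints linked above: create a PersonGroup, add a Person with a face, and then train the group. It reuses the `HttpClient` setup (`http`, `Endpoint`) from the earlier examples; the group ID, person name, person ID, and image URL are placeholders.

```csharp
// 1. Create the PersonGroup that will hold the people to match against.
await http.PutAsync($"{Endpoint}/face/v1.0/persongroups/myfriends",
    new StringContent("{\"name\":\"My Friends\"}", Encoding.UTF8, "application/json"));

// 2. Create a Person in the group; the response contains the new personId.
var personResponse = await http.PostAsync($"{Endpoint}/face/v1.0/persongroups/myfriends/persons",
    new StringContent("{\"name\":\"Anna\"}", Encoding.UTF8, "application/json"));
Console.WriteLine(await personResponse.Content.ReadAsStringAsync());

// 3. Add one or more face images to that person.
await http.PostAsync(
    $"{Endpoint}/face/v1.0/persongroups/myfriends/persons/<person-id>/persistedFaces",
    new StringContent("{\"url\":\"https://example.com/anna-face-1.jpg\"}", Encoding.UTF8, "application/json"));

// 4. Train the group so it can be used by Identify.
await http.PostAsync($"{Endpoint}/face/v1.0/persongroups/myfriends/train", null);
```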
-The [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation takes a face ID from DetectedFace or PersistedFace and either a FaceList or an array of other face IDs. With a FaceList, it returns a smaller FaceList of faces that are similar to the given face. With an array of face IDs, it similarly returns a smaller array.
+### Identification
-### Group
+The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
-The [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) operation takes an array of assorted face IDs from DetectedFace or PersistedFace and returns the same IDs grouped into several smaller arrays. Each "groups" array contains face IDs that appear similar. A single "messyGroup" array contains face IDs for which no similarities were found.
-### Identify
+### Verification
+
+The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. You can also optionally pass in the PersonGroup to which the candidate Person belongs to improve the API performance.
-The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several face IDs from DetectedFace or PersistedFace and a PersonGroup and returns a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
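To show how the two operations fit together, this hedged sketch parses the top Candidate and its confidence from an Identify response (using the JSON shape described in the Identify reference above) and then calls Verify as a final one-to-one check. It continues the earlier sketches: `identifyJson`, `http`, and `Endpoint` are assumed to exist, and the remaining IDs are placeholders.

```csharp
using System.Text.Json;

// "identifyJson" is the raw response body from the earlier Identify call.
using JsonDocument doc = JsonDocument.Parse(identifyJson);
JsonElement candidates = doc.RootElement[0].GetProperty("candidates");

if (candidates.GetArrayLength() > 0)
{
    string personId = candidates[0].GetProperty("personId").GetString();
    double confidence = candidates[0].GetProperty("confidence").GetDouble();
    Console.WriteLine($"Top candidate {personId} with confidence {confidence}");

    // Final one-to-one check: verify the source face against the candidate Person.
    var verify = await http.PostAsync(
        $"{Endpoint}/face/v1.0/verify",
        new StringContent(
            $"{{\"faceId\":\"<source-face-id>\",\"personGroupId\":\"myfriends\",\"personId\":\"{personId}\"}}",
            Encoding.UTF8, "application/json"));
    Console.WriteLine(await verify.Content.ReadAsStringAsync());
}
```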
## Input data
cognitive-services Named Entity Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/named-entity-types.md
Previously updated : 06/08/2021 Last updated : 08/02/2021
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
# Teams interoperability > [!IMPORTANT]
-> BYOI interoperability is in public preview and broadly available on request. To enable/disable [Teams tenant interoperability](../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
+> BYOI interoperability is in public preview and available to all Communication Services applications and Teams organizations.
> > Microsoft 365 authenticated interoperability is in private preview, and restricted using service controls to Azure Communication Services early adopters. To join early access program, complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8MfnD7fOYZEompFbYDoD4JUMkdYT0xKUUJLR001ODdQRk1ITTdOMlRZNSQlQCN0PWcu). >
Azure Communication Services supports two types of Teams interoperability depend
Applications can implement both authentication schemes and leave the choice of authentication up to the end user. ## Bring your own identity
-Bring your own identity (BYOI) is the common model for using Azure Communication Services and Teams interoperability. It supports any identity provider and authentication scheme. Your app can join Microsoft Teams meetings, and Teams will treat these users as anonymous external accounts. The name of Communication Services users displayed in Teams is configurable via the Communication Services Calling SDK.
-This capability is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application experience) into a meeting experience. Meeting details that need to be shared with external users of your application can be retrieved via the Graph API or from the calendar in Microsoft Teams.
+Bring your own identity (BYOI) is the common model for using Azure Communication Services and Teams interoperability. It supports any identity provider and authentication scheme. The first scenario enabled allows your application to join Microsoft Teams meetings, and Teams treats these users as anonymous external accounts, the same as users who join with the Teams anonymous web application. This model is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application experience) into a meeting experience. In the future, we will enable additional scenarios, including direct calling and chat, which will allow your application to initiate calls and chats with Teams users outside the context of a Teams meeting.
+
+The ability for Communication Services users to join Teams meetings as anonymous users is controlled by the existing "allow anonymous meeting join" configuration, which also controls the existing Teams anonymous meeting join. This setting can be updated in the Teams admin center (https://admin.teams.microsoft.com/meetings/settings) or with the Teams PowerShell cmdlet Set-CsTeamsMeetingConfiguration (https://docs.microsoft.com/powershell/module/skype/set-csteamsmeetingconfiguration). As with Teams anonymous meeting join, your application must have the meeting link to join, which can be retrieved via the Graph API or from the calendar in Microsoft Teams. The name of Communication Services users displayed in Teams is configurable via the Communication Services Calling SDK.
External users will be able to use core audio, video, screen sharing, and chat functionality via Azure Communication Services SDKs. Features such as raised hand, together mode, and breakout rooms will only be available for Teams users. Communication Services users can send and receive messages only while present in the Teams meeting and if the meeting is not scheduled for a channel.
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
The Azure Communication Services Calling SDK uses the following error codes to h
| Error code | Description | Action to take | | -- | | |
-| 403 | Forbidden / Authentication failure. | Ensure that your Communication Services token is valid and not expired. If you are using Teams Interoperability, make sure your Teams tenant has been added to the preview access allowlist. To enable/disable [Teams tenant interoperability](./teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).|
+| 403 | Forbidden / Authentication failure. | Ensure that your Communication Services token is valid and not expired. |
| 404 | Call not found. | Ensure that the number you're calling (or call you're joining) exists. | | 408 | Call controller timed out. | Call Controller timed out waiting for protocol messages from user endpoints. Ensure clients are connected and available. | | 410 | Local media stack or media infrastructure error. | Ensure that you're using the latest SDK in a supported environment. |
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
zone_pivot_groups: acs-web-ios-android
# Quickstart: Join your chat app to a Teams meeting
-> [!IMPORTANT]
-> To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
++ Get started with Azure Communication Services by connecting your chat solution to Microsoft Teams.
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
zone_pivot_groups: acs-plat-web-ios-android-windows
[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]
-> [!IMPORTANT]
-> To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
- Get started with Azure Communication Services by connecting your calling solution to Microsoft Teams using the JavaScript SDK. ::: zone pivot="platform-web"
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-pull-model.md
ms.devlang: dotnet Previously updated : 07/08/2021 Last updated : 08/02/2021
Here's some key differences between the change feed processor and pull model:
You can create a `FeedIterator` to process the change feed using the pull model. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed using that specific `FeedIterator`.
-You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. The `PageSizeHint` is the maximum number of items that will be returned in a single page.
+You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value, so that the items changed by the same transaction are returned as part of one atomic batch.
The `FeedIterator` comes in two flavors. In addition to the examples below that return entity objects, you can also obtain the response with `Stream` support. Streams allow you to read data without having it first deserialized, saving on client resources.
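As a minimal sketch (assuming the v3 .NET SDK members named in this section), the following method creates a `FeedIterator` that starts from the beginning of the change feed with a `PageSizeHint` of 10 and polls for pages, waiting when the service reports no new changes. The `Container` instance and the `ToDoItem` type are placeholders.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class ToDoItem
{
    public string id { get; set; }
}

public static class ChangeFeedPullSketch
{
    public static async Task ReadChangesAsync(Container container)
    {
        // Start from the beginning of the change feed and hint at 10 items per page.
        FeedIterator<ToDoItem> iterator = container.GetChangeFeedIterator<ToDoItem>(
            ChangeFeedStartFrom.Beginning(),
            ChangeFeedMode.Incremental,
            new ChangeFeedRequestOptions { PageSizeHint = 10 });

        while (iterator.HasMoreResults)
        {
            FeedResponse<ToDoItem> page = await iterator.ReadNextAsync();

            if (page.StatusCode == HttpStatusCode.NotModified)
            {
                // No new changes right now; wait before polling again (or break out).
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
            else
            {
                foreach (ToDoItem item in page)
                {
                    Console.WriteLine($"Changed item: {item.id}");
                }
            }
        }
    }
}
```

In a real application, you would typically persist the page's continuation token and resume later with `ChangeFeedStartFrom.ContinuationToken` instead of always starting from the beginning.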
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-migrationchoices.md
The following factors determine the choice of the migration tool:
|||||| |Online|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB|Azure Cosmos DB API for MongoDB |&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.| |Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB| Azure Cosmos DB API for MongoDB| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
-|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB <br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources. | &bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB <br/>&bull; JSON files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets.| &bull; Easy to set up and supports multiple sources. <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process.<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process. <br/>&bull; Needs custom code to increase read throughput for certain data sources.|
+|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db-mongodb-api.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB <br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources. | &bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB <br/>&bull; JSON files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets.| &bull; Easy to set up and supports multiple sources. <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process.<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process. <br/>&bull; Needs custom code to increase read throughput for certain data sources.|
|Offline|[Existing Mongo Tools (mongodump, mongorestore, Studio3T)](https://azure.microsoft.com/resources/videos/using-mongodb-tools-with-azure-cosmos-db/)|MongoDB | Azure Cosmos DB API for MongoDB| &bull; Easy to set up and integration. <br/>&bull; Needs custom handling for throttles.| ## Azure Cosmos DB Cassandra API
cosmos-db Global Dist Under The Hood https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/global-dist-under-the-hood.md
The service allows you to configure your Cosmos databases with either a single w
## Conflict resolution
-Our design for the update propagation, conflict resolution, and causality tracking is inspired from the prior work on [epidemic algorithms](https://www.cs.utexas.edu/~lorenzo/corsi/cs395t/04S/notes/naor98load.pdf) and the [Bayou](https://people.cs.umass.edu/~mcorner/courses/691M/papers/terry.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating the Cosmos DBΓÇÖs system design, they have also undergone significant transformation as we applied them to the Cosmos DB system. This was needed, because the previous systems were designed neither with the resource governance nor with the scale at which Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Cosmos DB delivers to its customers.
+Our design for the update propagation, conflict resolution, and causality tracking is inspired by the prior work on [epidemic algorithms](https://www.kth.se/social/upload/51647982f276546170461c46/4-gossip.pdf) and the [Bayou](https://people.cs.umass.edu/~mcorner/courses/691M/papers/terry.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating Cosmos DB's system design, they have also undergone significant transformation as we applied them to the Cosmos DB system. This was needed because the previous systems were designed neither with the resource governance nor with the scale at which Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Cosmos DB delivers to its customers.
Recall that a partition-set is distributed across multiple regions and follows Cosmos DB's (multi-region writes) replication protocol to replicate the data among the physical partitions comprising a given partition-set. Each physical partition (of a partition-set) accepts writes and serves reads typically to the clients that are local to that region. Writes accepted by a physical partition within a region are durably committed and made highly available within the physical partition before they are acknowledged to the client. These are tentative writes and are propagated to other physical partitions within the partition-set using an anti-entropy channel. Clients can request either tentative or committed writes by passing a request header. The anti-entropy propagation (including the frequency of propagation) is dynamic, based on the topology of the partition-set, regional proximity of the physical partitions, and the consistency level configured. Within a partition-set, Cosmos DB follows a primary commit scheme with a dynamically selected arbiter partition. The arbiter selection is dynamic and is an integral part of the reconfiguration of the partition-set based on the topology of the overlay. The committed writes (including multi-row/batched updates) are guaranteed to be ordered.
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-visualization-partners.md
+
+ Title: Visualize Azure Cosmos DB Gremlin API data using partner solutions
+description: Learn how to integrate Azure Cosmos DB graph data with different third-party visualization solutions.
+++++ Last updated : 07/22/2021++
+# Visualize graph data stored in Azure Cosmos DB Gremlin API with data visualization solutions
+
+You can visualize data stored in Azure Cosmos DB Gremlin API by using various data visualization solutions. The following solutions are recommended by the [Apache Tinkerpop community](https://tinkerpop.apache.org/#poweredby) for graph data visualization.
+
+## Linkurious Enterprise
+
+[Linkurious Enterprise](https://linkurio.us/product/) uses graph technology and data visualization to turn complex datasets into interactive visual networks. The platform connects to your data sources and enables investigators to seamlessly navigate across billions of entities and relationships. The result is a new ability to detect suspicious relationships without juggling queries or tables.
+
+The interactive interface of Linkurious Enterprise offers an easy way to investigate complex data. You can search for specific entities, expand connections to uncover hidden relationships, and apply layouts of your choice to untangle complex networks. Linkurious Enterprise is now compatible with Azure Cosmos DB Gremlin API. It's suitable for end-to-end graph visualization scenarios and supports read and write capabilities from the user interface. You can request a [demo of Linkurious with Azure Cosmos DB](https://linkurio.us/contact/).
++
+<b>Figure:</b> Linkurious Enterprise visualization flow
+### Useful links
+
+* [Product details](https://linkurio.us/product/)
+* [Documentation](https://doc.linkurio.us/)
+* [Demo](https://resources.linkurio.us/demo)
+* [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/linkurious.linkurious001?tab=overview)
+
+## Cambridge Intelligence
+
+[Cambridge Intelligence's](https://cambridge-intelligence.com/products/) graph visualization toolkits support Azure Cosmos DB. The following two visualization toolkits are supported by Azure Cosmos DB:
+
+* [KeyLines for JavaScript developers](https://cambridge-intelligence.com/keylines/)
+
+* [Re-Graph for React developers](https://cambridge-intelligence.com/regraph/)
++
+<b>Figure:</b> KeyLines visualization example at various levels of detail.
+
+These toolkits let you design high-performance graph visualization and analysis applications. They harness powerful Web Graphics Library (WebGL) rendering and carefully crafted code to give users a fast and insightful visualization experience. These tools are compatible with any browser, device, server, or database, and come with step-by-step tutorials, fully documented APIs, and interactive demos.
++
+<b>Figure:</b> Re-Graph visualization example at various levels of details
+### Useful links
+
+* [Try the toolkits](https://cambridge-intelligence.com/try/)
+* [KeyLines technology overview](https://cambridge-intelligence.com/keylines/technology/)
+* [Re-Graph technology overview](https://cambridge-intelligence.com/regraph/technology/)
+* [Graph visualization use cases](https://cambridge-intelligence.com/use-cases/)
+
+## Tom Sawyer
+
+[Tom Sawyer Perspectives](https://www.tomsawyer.com/perspectives/) is a robust platform for building enterprise-grade graph data visualization and analysis applications. It is a low-code graph and data visualization development platform that includes an integrated design, a preview interface, and extensive API libraries. The platform integrates enterprise data sources with powerful graph visualization, layout, and analysis technology to solve big data problems.
+
+Perspectives enables developers to quickly develop production-quality, data-oriented visualization applications. Two graphic modules, the "Designer" and the "Previewer," are used to build applications to visualize and analyze the specific data that drives each project. When used together, the Designer and Previewer provide an efficient round-trip process that dramatically speeds up application development. To visualize Azure Cosmos DB Gremlin API data using this platform, request a [free 60-day evaluation](https://www.tomsawyer.com/get-started) of this tool.
++
+<b>Figure:</b> Tom Sawyer Perspectives in action
+
+[Tom Sawyer Graph Database Browser](https://www.tomsawyer.com/graph-database-browser/) makes it easy to visualize and analyze data in Azure Cosmos DB Gremlin API. The Graph Database Browser helps you see and understand connections in your data without extensive knowledge of the query language or the schema. You can manually define the schema for your project or use schema extraction to create it. So even less technical users can interact with the data by loading the neighbors of selected nodes and building the visualization in whatever direction they need. Advanced users can execute queries using Gremlin, Cypher, or SPARQL to gain other insights. After you define the schema, you can load the Azure Cosmos DB data into the Perspectives model. With the help of the integrator definition, you can specify the location and configuration for the Gremlin endpoint. Later you can bind elements from the Azure Cosmos DB data source to elements in the Perspectives model and visualize your data.
+
+Users of all skill levels can take advantage of five unique graph layouts to display the graph in a way that provides the most meaning. And there are built-in centrality, clustering, and path-finding analyses to reveal previously unseen patterns. Using these techniques, organizations can identify critical patterns in areas like fraud detection, customer intelligence, and cybersecurity. Pattern recognition is very important for network analysts in areas such as general IT and network management, logistics, legacy system migration, and business transformation. Try a live demo of Tom Sawyer Graph Database Browser.
++
+<b>Figure:</b> Tom Sawyer Database Browser's visualization capabilities
+### Useful links
+
+* [Documentation](https://www.tomsawyer.com/graph-database-browser/)
+
+* [Trial for Tom Sawyer Perspectives](https://www.tomsawyer.com/get-started)
+
+* [Live Demo for Tom Sawyer Databrowser](https://support.tomsawyer.com/demonstrations/graph.database.browser.demo/)
+
+* [Deploy on Azure](https://www.tomsawyer.com/cs/c/?cta_guid=b85cf3fc-2978-426d-afb3-c1f858f38e73&signature=AAH58kGNc5criGRMHSUptSOwyD0Znf3lFw&pageId=41375082967&placement_guid=d6cb1de7-6d51-4a89-a012-5a167870a715&click=7bc863ee-3c45-4509-9334-ac7674b7e75e&hsutk=4fa7e492076c5cecf5f03faad22b4a19&canon=https%3A%2F%2Fwww.tomsawyer.com%2Fgraph-database-browser&utm_referrer=https%3A%2F%2Fwww.tomsawyer.com%2F&portal_id=8313502&redirect_url=APefjpF0sV6YjeRqi4bQCt0-ubf_cmTi_nSs28RvMy55Vk01NIf6jtTaTj3GUMJ9D9z5DvIwvPnfSw89Wj9JCS_7cNss_HxsDmlT7wmeJh7BUyuPNEGYGnhucgeUZUzWGqrEeWmReCZByeMdklbMuikFnwasX6046Op7hKKiuQJx84RGd4fe1Rvq7mRLaaySZxdvLlpMg13N_4xo_GzrHRl4P2_VGZGPRUgkS3EvsvLzfJzH36u2HHDSG6AuU9ZRNgiJiH2wMLAgGQT-vDzkSTnYRb0ljRFHCq9kPjsbVDw1bTn0G9R5ZmTbdskypc49-Ob_49MdHif1ufRA9BMLU3Ks6t9TCVJ6fo4R5255u5FK2_v3Jk10yd7y_EhLqzrAv2ov-TzxDd6b&__hstc=169273150.4fa7e492076c5cecf5f03faad22b4a19.1608290688565.1626359177649.1626364757376.11&__hssc=169273150.1.1626364757376&__hsfp=3487988390&contentType=standard-page)
+
+## Graphistry
+
+Graphistry automatically transforms your data into interactive, visual investigation maps built for the needs of analysts. It can quickly surface relationships between events and entities without having to write queries or wrangle data. You can harness your data without worrying about scale. From security, fraud, and IT investigations to 360° views of customers and supply chains, Graphistry turns the potential of your data into human insight and value.
++
+<b>Figure:</b> Graphistry Visualization snapshot
+
+With Graphistry's GPU client/cloud technology, you can do interactive visualization. By using a standard browser and the cloud, you can use all the data you want, and still remain fast, responsive, and interactive. If you want to run the browser on your own hardware, it's as easy as installing Docker. That way you get the analytical power of GPUs without having to think about GPUs.
++
+<b>Figure:</b> Graphistry in action
+
+### Useful links
+
+* [Documentation](https://www.graphistry.com/docs)
+
+* [Video Guides](https://www.graphistry.com/videos)
+
+* [Deploy on Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/graphistry.graphistry-core-2-24-9)
+
+## Graphlytic
+
+Graphlytic is a highly customizable web application for graph visualization and analysis. Users can interactively explore the graph, look for patterns with the Gremlin language, or use filters to find answers to any graph question. Graph rendering is done with the 'Cytoscape.js' library, which allows Graphlytic to render tens of thousands of nodes and hundreds of thousands of relationships at once.
+
+Graphlytic is compatible with Azure Cosmos DB and can be deployed to Azure in minutes. Graphlytic's UI can be customized and extended in many ways, for instance the default [visualization configuration](https://graphlytic.biz/doc/latest/Visualization_settings.html), [data schema](https://graphlytic.biz/doc/latest/Data_schema.html), [style mappings](https://graphlytic.biz/doc/latest/Style_mappers.html), [virtual properties](https://graphlytic.biz/doc/latest/Virtual_properties.html) in the visualization, or custom implemented [widgets](https://graphlytic.biz/doc/latest/Widgets.html) that can enhance the visualization features with bespoke reports or integrations.
+
+The following are two example scenarios:
+
+* **IT Management use case**
+Companies that run their IT operations on their own infrastructure, telcos, and IP providers all need solid network documentation and functional configuration management. Impact analyses that describe the interdependencies among network elements (active and passive) are developed to overcome blackouts, which cause significant financial losses, and even single outages that result in no or low service availability. Bottlenecks and single points of failure are identified and resolved, and both endpoint and route redundancies are implemented.
+Graphlytic property graph visualization is a perfect enabler for all the points mentioned above: network documentation, network configuration management, impact analysis, and asset management. It stores and depicts all relevant network configuration information in one place, bringing completely new added value to IT managers and field technicians.
+
+ :::image type="content" source="./media/graph-visualization-partners/graphlytic/it-management.gif" alt-text="Graphlytic IT Management use case demo" :::
+
+<b>Figure:</b> Graphlytic IT management use case
+
+* **Anti-fraud use case**
+The term "fraud pattern" is well known to every insurance company, bank, or e-commerce enterprise. Modern fraudsters build sophisticated fraud rings and schemes that are hard to unveil with traditional tools, and fraud can cause serious losses if it is not detected properly and on time. On the other hand, traditional red-flag systems with overly strict criteria must be adjusted to eliminate false positives, because too many indications would be overwhelming. Large amounts of time are spent trying to detect complex fraud, paralyzing investigators in their daily tasks.
+The basic idea behind Graphlytic is that the human eye can distinguish and find patterns in graphical form much more easily than in any table or data set. This means that an anti-fraud analyst can capture fraud schemes within a graph visualization more easily and quickly than with traditional tools alone.
+
+ :::image type="content" source="./media/graph-visualization-partners/graphlytic/antifraud.gif" alt-text="Graphlytic Fraud detection use case demo":::
+
+<b>Figure:</b> Graphlytic Fraud detection use case demo
+
+### Useful links
+
+* [Documentation](https://graphlytic.biz/doc/)
+* [Free Online Demo](https://graphlytic.biz/demo)
+* [Blog](https://graphlytic.biz/blog)
+* [REST API documentation](https://graphlytic.biz/doc/latest/REST_API.html)
+* [ETL job drivers & examples](https://graphlytic.biz/doc/latest/ETL_jobs.html)
+* [SMTP Email Server Integration](https://graphlytic.biz/doc/latest/SMTP_Email_Server_Connection.html)
+* [Geo Map Server Integration](https://graphlytic.biz/doc/latest/Geo_Map_Server_Integration.html)
+* [Single Sign-on Configuration](https://graphlytic.biz/doc/latest/Single_sign-on.html)
+
+## yWorks
+
+yWorks specializes in the development of professional software solutions that enable the clear visualization of graphs, diagrams, and networks. yWorks has brought together efficient data structures, complex algorithms, and advanced techniques that provide excellent user interaction on a multitude of target platforms. This allows the user to experience highly versatile and sophisticated diagram visualization in applications across many diverse areas.
+
+Azure Cosmos DB can be queried for data using Gremlin, an efficient graph traversal language. The user can query the database for the stored entities and use the relationships to traverse the connected neighborhood. This approach requires in-depth technical knowledge of the database itself and of the Gremlin query language to explore the stored data. With yWorks visualization, by contrast, you can visually explore the Azure Cosmos DB data, identify significant structures, and get a better understanding of relationships. Besides the visual exploration, you can also interactively edit the stored data by modifying the diagram, without any knowledge of the associated query language such as Gremlin. In this way, it provides a high-quality visualization and can analyze large data sets from Azure Cosmos DB data. You can use yFiles to add visualization capabilities to your own applications, dashboards, and reports, or to create new, white-label apps and tools for both in-house and customer-facing products.
++
+<b>Figure:</b> yWorks visualization snapshot
+
+With yWorks, you can create meaningful visualizations that help users gain insights into the data quickly and easily. Build interactive user interfaces that match your company's corporate design and easily connect to existing infrastructure and services. Use highly sophisticated automatic graph layouts to generate clear visualizations of the data hidden in your Azure Cosmos DB account. Efficient implementations of the most important graph analysis algorithms enable the creation of responsive user interfaces that highlight the information the user is interested in or needs to be aware of. Use yFiles to create interactive apps that work on desktops and mobile devices alike.
+
+Typical use-cases and data models include:
+
+* Social networks, money laundering data, and cash-flow networks, where similar entities are connected to each other
+* Process data where entities are being processed and move from one state to another
+* Organizational charts and networks, showing team hierarchies, but also majority ownership dependencies and relationships between companies or customers
+* Data lineage information and compliance data that can be visualized, reviewed, and audited
+* Computer network logs, website logs, and customer journey logs
+* Knowledge graphs, stored as triplets and in other formats
+* Product Lifecycle Management data
+* Bill of Material lists and Supply Chain data
+
+### Useful links
+
+* [Pricing](https://www.yworks.com/products/yfiles-for-html/pricing)
+* [Visualizing a Microsoft Azure Cosmos DB](https://www.yworks.com/pages/visualizing-a-microsoft-azure-cosmos-db)
+* [yFiles - the diagramming library](https://www.yworks.com/yfiles-overview)
+* [yWorks - Demos](https://www.yworks.com/products/yfiles/demos)
+
+### Next steps
+
+* [Cosmos DB - Gremlin API Pricing](./how-pricing-works.md)
cosmos-db Graph Visualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-visualization.md
- Title: Visualize your graph data in Azure Cosmos DB Gremlin API
-description: Learn how to integrate Azure Cosmos DB graph data with visualization solutions (Linkurious Enterprise, Cambridge Intelligence).
----- Previously updated : 07/02/2019--
-# Visualize graph data stored in Azure Cosmos DB Gremlin API with data visualization solutions
-
-You can visualize data stored in Azure Cosmos DB Gremlin API by using various data visualization solutions. The following solutions are recommended by the [Apache Tinkerpop community](https://tinkerpop.apache.org/#poweredby) for graph data visualization.
-
-## Linkurious Enterprise
--
-[Linkurious Enterprise](https://linkurio.us/product/) uses graph technology and data visualization to turn complex datasets into interactive visual networks. The platform connects to your data sources and enables investigators to seamlessly navigate across billions of entities and relationships. The result is a new ability to detect suspicious relationships without juggling with queries or tables.
-
-The interactive interface of Linkurious Enterprise offers an easy way to investigate complex data. You can search for specific entities, expand connections to uncover hidden relationships, and apply layouts of your choice to untangle complex networks. Linkurious Enterprise is now compatible with Azure Cosmos DB Gremlin API. It's suitable for end-to-end graph visualization scenarios and supports read and write capabilities from the user interface. You can request a [demo of Linkurious with Azure Cosmos DB](https://linkurio.us/contact/)
--
-## Cambridge Intelligence
--
-[Cambridge IntelligenceΓÇÖs](https://cambridge-intelligence.com/products/) graph visualization toolkits now support Azure Cosmos DB. The following two visualization toolkits are supported by Azure Cosmos DB:
--- [KeyLines for JavaScript developers](https://cambridge-intelligence.com/keylines/)--- [Re-Graph for React developers](https://cambridge-intelligence.com/regraph/)--
-These toolkits let you design high-performance graph visualization and analysis applications for your use case. They harness powerful Web Graphics Library(WebGL) rendering and carefully crafted code to give users a fast and insightful visualization experience. These tools are compatible with any browser, device, server or database, and come with step-by-step tutorials, fully documented APIs, and interactive demos.
---
-## Next steps
--- [Try the toolkits](https://cambridge-intelligence.com/try/)-- [KeyLines technology overview](https://cambridge-intelligence.com/keylines/technology/)-- [Re-Graph technology overview](https://cambridge-intelligence.com/regraph/technology/)-- [Graph visualization use cases](https://cambridge-intelligence.com/use-cases/)
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-dotnet-v3.md
The v3 SDK contains many usability and performance improvements, including:
* Fluent hierarchy that replaces the need for URI factory * Built-in support for change feed processor library * Built-in support for bulk operations
-* Mockabale APIs for easier unit testing
+* Mockable APIs for easier unit testing
* Transactional batch and Blazor support * Pluggable serializers * Scale non-partitioned and autoscale containers
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/online-backup-and-restore.md
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see the [Periodic backup mode](configure-periodic-backup-restore.md) article.
-* **Continuous backup mode** (currently in public preview) ΓÇô You choose this mode while creating the Azure Cosmos DB account. This mode allows you to do restore to any point of time within the last 30 days. To learn more, see the [Introduction to Continuous backup mode](continuous-backup-restore-introduction.md), provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template) articles.
+* **Continuous backup mode** ΓÇô You choose this mode while creating the Azure Cosmos DB account. This mode allows you to do restore to any point of time within the last 30 days. To learn more, see the [Introduction to Continuous backup mode](continuous-backup-restore-introduction.md), provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template) articles.
> [!NOTE] > If you configure a new account with continuous backup, you can do self-service restore via Azure portal, PowerShell, or CLI. If your account is configured in continuous mode, you canΓÇÖt switch it back to periodic mode. Currently existing accounts with periodic backup mode canΓÇÖt be changed into continuous mode.
cosmos-db Sql Api Query Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-query-metrics.md
IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
| `retrievedDocumentCount` | count | Total number of retrieved documents | | `retrievedDocumentSize` | bytes | Total size of retrieved documents in bytes | | `outputDocumentCount` | count | Number of output documents |
-| `writeOutputTimeInMs` | milliseconds | Query execution time in milliseconds |
+| `writeOutputTimeInMs` | milliseconds | Time spent writing the output in milliseconds |
| `indexUtilizationRatio` | ratio (<=1) | Ratio of number of documents matched by the filter to the number of documents loaded | The client SDKs may internally make multiple query operations to serve the query within each partition. The client makes more than one call per-partition if the total results exceed `x-ms-max-item-count`, if the query exceeds the provisioned throughput for the partition, or if the query payload reaches the maximum size per page, or if the query reaches the system allocated timeout limit. Each partial query execution returns a `x-ms-documentdb-query-metrics` for that page.
Here are some sample queries, and how to interpret some of the metrics returned
## Next steps * To learn about the supported SQL query operators and keywords, see [SQL query](sql-query-getting-started.md). * To learn about request units, see [request units](request-units.md).
-* To learn about indexing policy, see [indexing policy](index-policy.md)
+* To learn about indexing policy, see [indexing policy](index-policy.md)
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-visually.md
Previously updated : 06/30/2020 Last updated : 07/30/2021 # Visually monitor Azure Data Factory
After you create the user properties, you can monitor them in the monitoring lis
![Activity runs list with columns for user properties](media/monitor-visually/view-user-properties.png) + ## Rerun pipelines and activities
+
+ Rerun behavior of the container activities is as follows:
+
+- `Wait` - Activity will behave as before.
+- `Set Variable` - Activity will behave as before.
+- `Filter` - Activity will behave as before.
+- `Until` - Activity will evaluate the expression and will loop until the condition is satisfied. Inner activities may still be skipped based on the rerun rules.
+- `Foreach` - Activity will always loop on the items it receives. Inner activities may still be skipped based on the rerun rules.
+- `If and switch` - Conditions will always be evaluated. Inner activities may still be skipped based on the rerun rules.
+- `Execute pipeline activity` - The child pipeline will be triggered, but all activities in the child pipeline may still be skipped based on the rerun rules.
+ To rerun a pipeline that has previously run from the start, hover over the specific pipeline run and select **Rerun**. If you select multiple pipelines, you can use the **Rerun** button to run them all.
For a seven-minute introduction and demonstration of this feature, watch the fol
## Next steps
-To learn about monitoring and managing pipelines, see the [Monitor and manage pipelines programmatically](./monitor-programmatically.md) article.
+To learn about monitoring and managing pipelines, see the [Monitor and manage pipelines programmatically](./monitor-programmatically.md) article.
databox-online Azure Stack Edge Gpu Kubernetes Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-kubernetes-overview.md
The following diagram illustrates the implementation of Kubernetes on a 1-node A
For more information on the Kubernetes cluster architecture, go to [Kubernetes core concepts](https://kubernetes.io/docs/concepts/architecture/).
+The master and the worker nodes are virtual machines that consume CPU and memory. When deploying Kubernetes workloads, it is important to understand the compute requirements for the master and worker VMs.
+|Kubernetes VM type|CPU and memory requirement|
+|||
+|Master VM|4 cores, 4-GB RAM|
+|Worker VM|12 cores, 32-GB RAM|
<!--The Kubernetes cluster control plane components make global decisions about the cluster. The control plane has: - *kubeapiserver* that is the front end of the Kubernetes API and exposes the API.
defender-for-iot How To Configure With Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-configure-with-sentinel.md
Last updated 05/26/2021
-# Connect your data from Defender for IoT for device builders to Azure Sentinel
+# Connect your data from Defender for IoT for device builders to Azure Sentinel (Public preview)
Use the Defender for IoT connector to stream all your Defender for IoT events into Azure Sentinel.
defender-for-iot How To Configure With Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-configure-with-sentinel.md
Last updated 06/14/2021
-# Connect your data from Defender for IoT for organizations to Azure Sentinel
+# Connect your data from Defender for IoT for organizations to Azure Sentinel (Public preview)
Use the Defender for IoT connector to stream all your Defender for IoT events into Azure Sentinel.
defender-for-iot Integration Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/integration-servicenow.md
- Title: About the ServiceNow integration
-description: The Defender for IoT ICS Management application for ServiceNow provides SOC analysts with multidimensional visibility into the specialized OT protocols and IoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
Previously updated : 1/17/2021---
-# The Defender for IoT ICS management application for ServiceNow
-
-The Defender for IoT ICS Management application for ServiceNow provides SOC analysts with multidimensional visibility into the specialized OT protocols and IoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. This is an important evolution given the ongoing convergence of IT and OT to support new IoT initiatives, such as smart machines and real-time intelligence.
-
-The application also enables both IT and OT incident response from within one corporate SOC.
-
-## About Defender for IoT
-
-Defender for IoT delivers the only ICS and IoT cybersecurity platform built by blue-team experts with a track record defending critical national infrastructure, and the only platform with patented ICS-aware threat analytics and machine learning. Defender for IoT provides:
--- Immediate insights about ICS the device landscape with an extensive range of details about attributes.--- ICS-aware deep embedded knowledge of OT protocols, devices, applications, and their behaviors.--- Immediate insights into vulnerabilities, and known zero-day threats.--- An automated ICS threat modeling technology to predict the most likely paths of targeted ICS attacks via proprietary analytics.-
-> [!Note]
-> References to CyberX refer to Azure Defender for IoT.
-
-## About the integration
-
-The Defender for IoT integration with ServiceNow provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management to previously unreachable ICS & IoT devices.
-
-### Threat management
-
-The Defender for IoT ICS Management application helps:
--- Reduce the time required for industrial and critical infrastructure organizations to detect, investigate, and act on cyber threats.--- Obtain real-time intelligence about OT risks.--- Correlate Defender for IoT alerts with ServiceNow threat monitoring and incident management workflows.--- Trigger ServiceNow tickets and workflows with other services and applications on the ServiceNow platform.-
-ICS and SCADA security threats are identified by Defender for IoT security engines, which provide immediate alert response to malware attacks, network, and security policy deviations, as well as operational and protocol anomalies. For details about alert details sent to ServiceNow, see [Alert reporting](#alert-reporting).
-
-### Device visibility and management
-
-The ServiceNow Configuration Management Database (CMDB) is enriched and supplemented with a rich set of device attributes pushed by the Defender for IoT platform. This ensures comprehensive and continuous visibility into the device landscape and lets you monitor and respond from a single-pane-of-glass. For details about device attributes sent to ServiceNowSee, see [View Defender for IoT detections in ServiceNow](#view-defender-for-iot-detections-in-servicenow).
-
-## System requirements and architecture
-
-This article describes:
--- **Software Requirements** -- **Architecture**-
-## Software requirements
--- ServiceNow Service Management version 3.0.2--- Defender for IoT patch 2.8.11.1 or above-
-> [!Note]
-> If you are already working with a Defender for IoT and ServiceNow integration, and upgrade using the on-premises management console, pervious data received from Defender for IoT sensors should be cleared from ServiceNow.
-
-## Architecture
-
-### On-premises management console architecture
-
-The on-premises management console provides a unified source for all the device and alert information sent to ServiceNow.
-
-You can set up an on-premises management console to communicate with one instance of ServiceNow. The on-premises management console pushes sensor data to the Defender for IoT application using REST API.
-
-If you are setting up your system to work with an on-premises management console, disable the ServiceNow Sync, Forwarding Rules and proxy configurations in sensors, if they were set up.
-
-These configurations should be set up for the on-premises management console. Configuration instructions are described in this article.
-
-### Sensor architecture
-
-If you want to set up your environment to include direct communication between sensors and ServiceNow, for each sensor define the ServiceNow Sync, Forwarding rules, and proxy configuration (if a proxy is needed).
-
-We recommend setting up your integration using the on-premises management console to communicate with ServiceNow.
-
-## Create access tokens in ServiceNow
-
-This article describes how to create an access token in ServiceNow. The token is needed to communicate with Defender for IoT.
-
-You will need the **Client ID** and **Client Secret** when creating Defender for IoT Forwarding rules, which forward alert information to ServiceNow, and when configuring Defender for IoT to push device attributes to ServiceNow tables.
-
-## Set up Defender for IoT to communicate with ServiceNow
-
-This article describes how to set up Defender for IoT to communicate with ServiceNow.
-
-### Send Defender for IoT alert information
-
-This article describes how to configure Defender for IoT to push alert information to ServiceNow tables. For information about alert data sent, see [Alert reporting](#alert-reporting).
-
-Defender for IoT alerts appear in ServiceNow as security incidents.
-
-Define a Defender for IoT *Forwarding* rule to send alert information to ServiceNow.
-
-To define the rule:
-
-1. In the Defender for IoT left pane, select **Forwarding**.
-
-1. Select the :::image type="content" source="media/integration-servicenow/plus.png" alt-text="The plus icon button."::: icon. The Create Forwarding Rule dialog box opens.
-
- :::image type="content" source="media/integration-servicenow/forwarding-rule.png" alt-text="Create Forwarding Rule":::
-
-1. Add a rule name.
-
-1. Define criteria under which Defender for IoT will trigger the forwarding rule. Working with Forwarding rule criteria helps pinpoint and manage the volume of information sent from Defender for IoT to ServiceNow. The following options are available:
-
   - **Severity levels:** This is the minimum severity level of incidents to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. Levels are predefined by Defender for IoT.
-
- - **Protocols:** Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
-
- - **Engines:** Select the required engines or choose them all. Alerts from selected engines will be sent.
-
-1. Verify that **Report Alert Notifications** is selected.
-
-1. In the Actions section, select **Add** and then select **ServiceNow**.
-
- :::image type="content" source="media/integration-servicenow/select-servicenow.png" alt-text="Select ServiceNow from the dropdown options.":::
-
-1. Enter the ServiceNow action parameters:
-
- :::image type="content" source="media/integration-servicenow/parameters.png" alt-text="Fill in the ServiceNow action parameters":::
-
-1. In the **Actions** pane, set the following parameters:
-
- | Parameter | Description |
- |--|--|
- | Domain | Enter the ServiceNow server IP address. |
- | Username | Enter the ServiceNow server username. |
- | Password | Enter the ServiceNow server password. |
- | Client ID | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. |
- | Client Secret | Enter the client secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. |
- | Report Type | **Incidents**: Forward a list of alerts that are presented in ServiceNow with an incident ID and short description of each alert.<br /><br />**Defender for IoT Application**: Forward full alert information, including the sensor details, the engine, the source, and destination addresses. The information is forwarded to the Defender for IoT on the ServiceNow application. |
-
-1. Select **SAVE**. Defender for IoT alerts appear as incidents in ServiceNow.
-
-### Send Defender for IoT device attributes
-
-This article describes how to configure Defender for IoT to push an extensive range of device attributes to ServiceNow tables. See ***Inventory Information*** for details about the kind of information pushed to ServiceNow.
-
-To send attributes to ServiceNow, you must map your on-premises management console to a ServiceNow instance. This ensures that the Defender for IoT platform can communicate and authenticate with the instance.
-
-To add a ServiceNow instance:
-
-1. Sign in to your Defender for IoT on-premises management console.
-
-1. Select **System Settings** and then **ServiceNow** from the on-premises management console Integration section.
-
- :::image type="content" source="media/integration-servicenow/servicenow.png" alt-text="Select the ServiceNow button.":::
-
-1. Enter the following sync parameters in the ServiceNow Sync dialog box.
-
- :::image type="content" source="media/integration-servicenow/sync.png" alt-text="The ServiceNow sync dialog box.":::
-
 | Parameter | Description |
- |--|--|
- | Enable Sync | Enable and disable the sync after defining parameters. |
- | Sync Frequency (minutes) | By default, information is pushed to ServiceNow every 60 minutes. The minimum is 5 minutes. |
- | ServiceNow Instance | Enter the ServiceNow instance URL. |
- | Client ID | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. |
- | Client Secret | Enter the Client Secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. |
- | Username | Enter the username for this instance. |
- | Password | Enter the password for this instance. |
-
-1. Select **SAVE**.
-
-## Verify communication
-
-Verify that the on-premises management console is connected to the ServiceNow instance by reviewing the *Last Sync* date.
--
-## Set up the integrations using the HTTPS proxy
-
-When setting up the Defender for IoT and ServiceNow integration, the on-premises management console and the ServiceNow server communicate using port 443. If the ServiceNow server is behind a proxy, the default port cannot be used.
-
-Defender for IoT supports an HTTPS proxy in the ServiceNow integration by letting you change the default port used for the integration.
-
-To configure the proxy:
-
-1. Edit global properties in on-premises management console:
- `sudo vim /var/cyberx/properties/global.properties`
-
-2. Add the following parameters:
-
- - `servicenow.http_proxy.enabled=1`
-
- - `servicenow.http_proxy.ip=1.179.148.9`
-
- - `servicenow.http_proxy.port=59125`
-
-3. Save and exit.
-
-4. Run the following command: `sudo monit restart all`
-
-After configuration, all the ServiceNow data is forwarded using the configured proxy.
-
-## Download the Defender for IoT application
-
-This article describes how to download the application.
-
-To access the Defender for IoT application:
-
-1. Navigate to <https://store.servicenow.com/>
-
-2. Search for `Defender for IoT` or `CyberX IoT/ICS Management`.
-
- :::image type="content" source="media/integration-servicenow/search-results.png" alt-text="Search for CyberX in the search bar.":::
-
-3. Select the application.
-
- :::image type="content" source="media/integration-servicenow/cyberx-app.png" alt-text="Select the application from the list.":::
-
-4. Select **Request App.**
-
- :::image type="content" source="media/integration-servicenow/sign-in.png" alt-text="Sign in to the application with your credentials.":::
-
-5. Sign in and download the application.
-
-## View Defender for IoT detections in ServiceNow
-
-This article describes the device attributes and alert information presented in ServiceNow.
-
-To view device attributes:
-
-1. Sign in to ServiceNow.
-
-2. Navigate to **CyberX Platform**.
-
-3. Navigate to **Inventory** or **Alert**.
-
- [:::image type="content" source="media/integration-servicenow/alert-list.png" alt-text="Inventory or Alert":::](media/integration-servicenow/alert-list.png#lightbox)
-
-## Inventory information
-
-The Configuration Management Database (CMDB) is enriched and supplemented by data sent by Defender for IoT to ServiceNow. By adding or updating device attributes on ServiceNow's CMDB configuration item tables, Defender for IoT can trigger ServiceNow workflows and business rules.
-
-The following information is available:
-- Device attributes, for example the device MAC, OS, vendor, or protocol detected.
-- Firmware information, for example the firmware version and serial number.
-- Connected device information, for example the direction of the traffic between the source and destination.
-
-### Device attributes
-
-This article describes the device attributes pushed to ServiceNow.
-
-| Item | Description |
-|--|--|
-| Appliance | The name of the sensor that detected the traffic. |
-| ID | The device ID assigned by Defender for IoT. |
-| Name | The device name. |
-| IP Address | The device IP address or addresses. |
-| Type | The device type, for example a switch, PLC, historian, or Domain Controller. |
-| MAC Address | The device MAC address or addresses. |
-| Operating System | The device operating system. |
-| Vendor | The device vendor. |
-| Protocols | The protocols detected in the traffic generated by the device. |
-| Owner | Enter the name of the device owner. |
-| Location | Enter the physical location of the device. |
-
-You can also view the devices connected to a selected device in this view.
-
-To view connected devices:
-
-1. Select a device and then select the **Appliance** listed for that device.
-
- :::image type="content" source="media/integration-servicenow/appliance.png" alt-text="Select the desired appliance from the list.":::
-
-1. In the **Device Details** dialog box, select **Connected Devices**.
-
-### Firmware details
-
-This article describes the device firmware information pushed to ServiceNow.
-
-| Item | Description |
-|--|--|
-| Appliance | The name of the sensor that detected the traffic. |
-| Device | The device name. |
-| Address | The device IP address. |
-| Module Address | The device model and slot number or ID. |
-| Serial | The device serial number. |
-| Model | The device model number. |
-| Version | The firmware version number. |
-| Additional Data | More data about the firmware as defined by the vendor, for example the device type. |
-
-### Connection details
-
-This article describes the device connection information pushed to ServiceNow.
--
-| Item | Description |
-|--|--|
-| Appliance | The name of the sensor that detected the traffic. |
-| Direction | The direction of the traffic. <br /> <br /> - **One Way** indicates that the Destination is the server and Source is the client. <br /> <br /> - **Two Way** indicates that both the source and the destination are servers, or that the client is unknown. |
-| Source device ID | The IP address of the device that communicated with the connected device. |
-| Source device name | The name of the device that communicated with the connected device. |
-| Destination device ID | The IP address of the connected device. |
-| Destination device name | The name of the connected device. |
-
-## Alert reporting
-
-Alerts are triggered when Defender for IoT engines detect changes in network traffic and behavior that require your attention. For details on the kinds of alerts each engine generates, see [About alert engines](#about-alert-engines).
-
-This article describes the device alert information pushed to ServiceNow.
-
-| Item | Description |
-|--|--|
-| Created | The time and date the alert was generated. |
-| Engine | The engine that detected the event. |
-| Title | The alert title. |
-| Description | The alert description. |
-| Protocol | The protocol detected in the traffic. |
-| Severity | The alert severity defined by Defender for IoT. |
-| Appliance | The name of the sensor that detected the traffic. |
-| Source name | The source name. |
-| Source IP address| The source IP address. |
-| Destination name | The destination name. |
-| Destination IP address | The destination IP address. |
-| Assignee | Enter the name of the individual assigned to the ticket. |
-
-### Updating alert information
-
-Select an entry in the **Created** column to view alert information in a form. You can update alert details and assign the alert to an individual to review and handle.
-
-[:::image type="content" source="media/integration-servicenow/alert.png" alt-text="View the alert's information.":::](media/integration-servicenow/alert.png#lightbox)
-
-### About alert engines
-
-This article describes the kind of alerts each engine triggers.
-
-| Alert type | Description |
-|--|--|
-| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /><br />- A new device is detected. <br /><br />- A new configuration is detected on a device. <br /><br />- A device not defined as a programming device carries out a programming change. <br /><br />- A firmware version changed. |
-| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
-| Operational alerts | Triggered when the Operational engine detects network operational incidents or device malfunctioning. For example, a network device was stopped using a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
-| Malware alerts | Triggered when the Malware engine detects malicious network activity, for example, known attacks such as Conficker. |
-| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scanning but is not defined as a scanning device. |
-
-## Next steps
-
-Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | 10G, 100G | | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | 10G, 100G | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| 10G, 100G | CDC, Equinix |
-| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco |
+| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom |
| **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea | | **Chennai2** | Airtel | 2 | South India | 10G | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
The following table shows connectivity locations and the service providers for e
| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon | | **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin |
-| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco |
+| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom |
| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric | | **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported |Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC | | **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 | | **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney |
+| **Vodacom** |Supported |Supported |Cape Town, Johannesburg|
| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported |Amsterdam2, London, Singapore | | **[Vodafone Idea](https://www.vodafone.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Mumbai2 | | **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported |Amsterdam, Chicago, Dallas, Denver, London, Los Angeles, Montreal, New York, Paris, Phoenix, Seattle, Silicon Valley, Toronto, Washington DC, Washington DC2 |
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-certificates.md
Previously updated : 07/15/2021 Last updated : 08/02/2021 # Azure Firewall Premium certificates --
- To properly configure Azure Firewall Premium TLS inspection, you must provide a valid intermediate CA certificate and deposit it in Azure Key vault.
+To properly configure Azure Firewall Premium TLS inspection, you must provide a valid intermediate CA certificate and deposit it in Azure Key vault.
## Certificates used by Azure Firewall Premium
Write-Host "================"
```
+## Certificate auto-generation (preview)
+
+For non-production deployments, you can use the Azure Firewall Premium certificate auto-generation mechanism, which automatically creates the following three resources for you:
+
+- Managed Identity
+- Key Vault
+- Self-signed Root CA certificate
+
+Just choose the new preview managed identity, and it ties the three resources together in your Premium policy and sets up TLS inspection.
++ ## Troubleshooting If your CA certificate is valid, but you can't access FQDNs or URLs under TLS inspection, check the following items:
hdinsight Hdinsight 36 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-36-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight 3.6
description: Learn about the Apache Hadoop components and versions in Azure HDInsight 3.6. -- Last updated 02/08/2021
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-40-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight 4.0
description: Learn about the Apache Hadoop components and versions in Azure HDInsight 4.0. -- Last updated 02/08/2021
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight
description: Learn about the Apache Hadoop components and versions in Azure HDInsight. -- Last updated 02/08/2021
hdinsight Hdinsight Overview Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-overview-versioning.md
Title: Versioning introduction - Azure HDInsight
description: Learn how versioning works in Azure HDInsight. -- Last updated 02/08/2021
hdinsight Hdinsight Retired Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-retired-versions.md
Title: Azure HDInsight retired versions
description: Learn about retired versions in Azure HDInsight. -- Last updated 02/08/2021
hdinsight Hdinsight Rotate Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-rotate-storage-keys.md
Title: Update Azure Storage account access key in Azure HDInsight
description: Learn how to update Azure Storage account access key in Azure HDInsight cluster. -- Last updated 06/29/2021
hdinsight Hdinsight Using Spark Query Hbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-using-spark-query-hbase.md
As an example, the following table lists two versions and the corresponding comm
|Spark version| HDI HBase version | SHC version | Command | | :--:| :-: | :--: |:-- | | 2.1 | HDI 3.6 (HBase 1.1) | 1.1.1-2.1-s_2.11 | `spark-shell --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --repositories https://repo.hortonworks.com/content/groups/public/` |
- | 2.4 | HDI 4.0 (HBase 2.0) | 1.1.0.3.1.2.2-1 | `spark-shell --packages com.hortonworks.shc:shc-core:1.1.0.3.1.2.2-1 --repositories http://repo.hortonworks.com/content/groups/public/` |
+
2. Keep this Spark shell instance open and continue to [Define a catalog and query](#define-a-catalog-and-query). If you don't find the jars that correspond to your versions in the SHC Core repository, continue reading.
-You can build the jars directly from the [spark-hbase-connector](https://github.com/hortonworks-spark/shc) GitHub branch. For example, if you are running with Spark 2.3 and HBase 1.1, complete these steps:
+For subsequent combinations of Spark and HBase versions, these artifacts are no longer published to the above repo. You can build the jars directly from the [spark-hbase-connector](https://github.com/hortonworks-spark/shc) GitHub branch. For example, if you are running with Spark 2.4 and HBase 2.1, complete these steps:
1. Clone the repo:
You can build the jars directly from the [spark-hbase-connector](https://github.
git clone https://github.com/hortonworks-spark/shc ```
-2. Go to branch-2.3:
+2. Go to branch-2.4:
```bash
- git checkout branch-2.3
+ git checkout branch-2.4
``` 3. Build from the branch (creates a .jar file):
You can build the jars directly from the [spark-hbase-connector](https://github.
3. Run the following command (be sure to change the .jar name that corresponds to the .jar file you built): ```bash
- spark-shell --jars <path to your jar>,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar
+ spark-shell --jars <path to your jar>,/usr/hdp/current/hbase-client/lib/shaded-clients/*
``` 4. Keep this Spark shell instance open and continue to the next section.
hdinsight Apache Hive Warehouse Connector Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-operations.md
The results of the query are Spark DataFrames, which can be used with Spark libr
## Writing out Spark DataFrames to Hive tables
-Spark doesn't natively support writing to Hive's managed ACID tables. However,using HWC, you can write out any DataFrame to a Hive table. You can see this functionality at work in the following example:
+Spark doesn't natively support writing to Hive's managed ACID tables. However, using HWC, you can write out any DataFrame to a Hive table. You can see this functionality at work in the following example:
1. Create a table called `sampletable_colorado` and specify its columns using the following command:
Spark doesn't natively support writing to Hive's managed ACID tables. However,us
hive.createTable("sampletable_colorado").column("clientid","string").column("querytime","string").column("market","string").column("deviceplatform","string").column("devicemake","string").column("devicemodel","string").column("state","string").column("country","string").column("querydwelltime","double").column("sessionid","bigint").column("sessionpagevieworder","bigint").create() ```
-1. Filter the table `hivesampletable` where the column `state` equals `Colorado`. This hive query returns a Spark DataFrame ans sis saved in the Hive table `sampletable_colorado` using the `write` function.
+1. Filter the table `hivesampletable` where the column `state` equals `Colorado`. This Hive query returns a Spark DataFrame, and the result is saved in the Hive table `sampletable_colorado` using the `write` function.
```scala hive.table("hivesampletable").filter("state = 'Colorado'").write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector").mode("append").option("table","sampletable_colorado").save()
Follow the steps below to ingest data from a Spark stream on localhost port 9999
1. Generate data for the Spark stream that you created, by doing the following steps: 1. Open a second SSH session on the same Spark cluster.
- 1. At the command prompt, type `nc -lk 9999`. This command uses the netcat utility to send data from the command line to the specified port.
+ 1. At the command prompt, type `nc -lk 9999`. This command uses the `netcat` utility to send data from the command line to the specified port.
1. Return to the first SSH session and create a new Hive table to hold the streaming data. At the spark-shell, enter the following command:
Follow the steps below to ingest data from a Spark stream on localhost port 9999
hive.table("stream_table").show() ```
-Use **Ctrl + C** to stop netcat on the second SSH session. Use `:q` to exit spark-shell on the first SSH session.
+Use **Ctrl + C** to stop `netcat` on the second SSH session. Use `:q` to exit spark-shell on the first SSH session.
## Next steps * [HWC integration with Apache Spark and Apache Hive](./apache-hive-warehouse-connector.md) * [Use Interactive Query with HDInsight](./apache-interactive-query-get-started.md) * [HWC integration with Apache Zeppelin](./apache-hive-warehouse-connector-zeppelin.md)
+* [HWC supported APIs](./hive-warehouse-connector-apis.md)
hdinsight Hive Warehouse Connector Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/hive-warehouse-connector-apis.md
+
+ Title: Hive Warehouse Connector APIs in Azure HDInsight
+description: Learn about the different APIs of Hive Warehouse Connector.
++++ Last updated : 07/29/2021++
+# Hive Warehouse Connector APIs in Azure HDInsight
+
+This article lists all the APIs supported by the Hive Warehouse Connector. All the examples shown below are run using spark-shell and a Hive Warehouse Connector session.
+
+To create a Hive Warehouse Connector session:
+
+```scala
+import com.hortonworks.hwc.HiveWarehouseSession
+val hive = HiveWarehouseSession.session(spark).build()
+```
+
+## Prerequisite
+
+Complete the [Hive Warehouse Connector setup](./apache-hive-warehouse-connector.md#hive-warehouse-connector-setup) steps.
++
+## Supported APIs
+
+- Set the database:
+ ```scala
+ hive.setDatabase("<database-name>")
+ ```
+
+- List all databases:
+ ```scala
+ hive.showDatabases()
+ ```
+
+- List all tables in the current database:
+ ```scala
+ hive.showTables()
+ ```
+
+- Describe a table
+
+ ```scala
+ // Describes the table <table-name> in the current database
+ hive.describeTable("<table-name>")
+ ```
+
+ ```scala
+ // Describes the table <table-name> in <database-name>
+ hive.describeTable("<database-name>.<table-name>")
+ ```
+
+- Drop a database
+
+ ```scala
+ // ifExists and cascade are boolean variables
+ hive.dropDatabase("<database-name>", ifExists, cascade)
+ ```
+
+- Drop a table in the current database
+
+ ```scala
+ // ifExists and purge are boolean variables
+ hive.dropTable("<table-name>", ifExists, purge)
+ ```
+
+- Create a database
+ ```scala
+ // ifNotExists is boolean variable
+ hive.createDatabase("<database-name>", ifNotExists)
+ ```
+
+- Create a table in current database
+ ```scala
+ // Returns a builder to create table
+ val createTableBuilder = hive.createTable("<table-name>")
+ ```
+
+ Builder for create-table supports only the below operations:
+
+ ```scala
+ // Create only if the table does not exist already
+ createTableBuilder = createTableBuilder.ifNotExists()
+ ```
+
+ ```scala
+ // Add columns
+ createTableBuilder = createTableBuilder.column("<column-name>", "<datatype>")
+ ```
+
+ ```scala
+ // Add partition column
+ createTableBuilder = createTableBuilder.partition("<partition-column-name>", "<datatype>")
+ ```
+ ```scala
+ // Add table properties
+ createTableBuilder = createTableBuilder.prop("<key>", "<value>")
+ ```
+ ```scala
+ // Creates a bucketed table,
+ // Parameters are numOfBuckets (integer) followed by column names for bucketing
+ createTableBuilder = createTableBuilder.clusterBy(numOfBuckets, "<column1>", .... , "<columnN>")
+ ```
+
+ ```scala
+ // Creates the table
+ createTableBuilder.create()
+ ```
+
+ > [!NOTE]
+ > This API creates an ORC-formatted table at the default location. For other features/options, or to create a table using Hive queries, use the `executeUpdate` API.
++
+- Read a table
+
+ ```scala
+ // Returns a Dataset<Row> that contains data of <table-name> in the current database
+ hive.table("<table-name>")
+ ```
+
+- Execute DDL commands on HiveServer2
+
+ ```scala
+ // Executes the <hive-query> against HiveServer2
+ // Returns true or false if the query succeeded or failed respectively
+ hive.executeUpdate("<hive-query>")
+ ```
+
+ ```scala
+ // Executes the <hive-query> against HiveServer2
+ // Throws an exception if propagateException is true and the query threw an exception in HiveServer2
+ // Returns true or false if the query succeeded or failed respectively
+ hive.executeUpdate("<hive-query>", propagateException) // propagate exception is boolean value
+ ```
+
+- Execute Hive query and load result in Dataset
+
+ - Executing query via LLAP daemons. **[Recommended]**
+ ```scala
+ // <hive-query> should be a hive query
+ hive.executeQuery("<hive-query>")
+ ```
+ - Executing query through HiveServer2 via JDBC.
+
+ Set `spark.datasource.hive.warehouse.smartExecution` to `false` in spark configs before starting spark session to use this API
+ ```scala
+ hive.execute("<hive-query>")
+ ```
+
+- Close Hive warehouse connector session
+ ```scala
+ // Closes all the open connections and
+ // release resources/locks from HiveServer2
+ hive.close()
+ ```
+
+- Execute Hive Merge query
+
+ This API creates a Hive merge query of the following format:
+
+ ```
+ MERGE INTO <current-db>.<target-table> AS <targetAlias> USING <source expression/table> AS <sourceAlias>
+ ON <onExpr>
+ WHEN MATCHED [AND <updateExpr>] THEN UPDATE SET <nameValuePair1> ... <nameValuePairN>
+ WHEN MATCHED [AND <deleteExpr>] THEN DELETE
+ WHEN NOT MATCHED [AND <insertExpr>] THEN INSERT VALUES <value1> ... <valueN>
+ ```
+
+ ```scala
+ val mergeBuilder = hive.mergeBuilder() // Returns a builder for merge query
+ ```
+ Builder supports the following operations:
+
+ ```scala
+ mergeBuilder.mergeInto("<target-table>", "<targetAlias>")
+ ```
+
+ ```scala
+ mergeBuilder.using("<source-expression/table>", "<sourceAlias>")
+ ```
+
+ ```scala
+ mergeBuilder.on("<onExpr>")
+ ```
+
+ ```scala
+ mergeBuilder.whenMatchedThenUpdate("<updateExpr>", "<nameValuePair1>", ... , "<nameValuePairN>")
+ ```
+
+ ```scala
+ mergeBuilder.whenMatchedThenDelete("<deleteExpr>")
+ ```
+
+ ```scala
+ mergeBuilder.whenNotMatchedInsert("<insertExpr>", "<value1>", ... , "<valueN>");
+ ```
+
+ ```scala
+ // Executes the merge query
+ mergeBuilder.merge()
+ ```
+
+- Write a Dataset to Hive Table in batch
+ ```scala
+ df.write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
+ .option("table", tableName)
+ .mode(SaveMode.Type)
+ .save()
+ ```
+ - TableName should be of the form `<db>.<table>` or `<table>`. If no database name is provided, the table will be searched/created in the current database
+
+ - SaveMode types are:
+
+ - Append: Appends the dataset to the given table
+
+ - Overwrite: Overwrites the data in the given table with dataset
+
+ - Ignore: Skips write if table already exists, no error thrown
+
+ - ErrorIfExists: Throws error if table already exists
++
+- Write a Dataset to Hive Table using HiveStreaming
+ ```scala
+ df.write.format("com.hortonworks.spark.sql.hive.llap.HiveStreamingDataSource")
+ .option("database", databaseName)
+ .option("table", tableName)
+ .option("metastoreUri", "<HMS_URI>")
+ // .option("metastoreKrbPrincipal", principal), add if executing in ESP cluster
+ .save()
+
+ // To write to static partition
+ df.write.format("com.hortonworks.spark.sql.hive.llap.HiveStreamingDataSource")
+ .option("database", databaseName)
+ .option("table", tableName)
+ .option("partition", partition)
+ .option("metastoreUri", "<HMS URI>")
+ // .option("metastoreKrbPrincipal", principal), add if executing in ESP cluster
+ .save()
+ ```
+ > [!NOTE]
+ > Stream writes always append data.
++
+- Writing a spark stream to a Hive Table
+ ```scala
+ stream.writeStream
+ .format("com.hortonworks.spark.sql.hive.llap.streaming.HiveStreamingDataSource")
+ .option("metastoreUri", "<HMS_URI>")
+ .option("database", databaseName)
+ .option("table", tableName)
+ //.option("partition", partition) , add if inserting data in partition
+ //.option("metastoreKrbPrincipal", principal), add if executing in ESP cluster
+ .start()
+ ```
hdinsight Interactive Query Troubleshoot Migrate 36 To 40 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md
This article provides answers to some of the most common issues that customers f
## Reduce latency when running `DESCRIBE TABLE_NAME` Workaround:
-* Increase maximum number of objects (tables/partitions) that can be retrieved from metastore in one batch. Set it to a large number (default is 300) till satisfactory latency levels are reached. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause higher memory requirement at the client side.
+* Increase maximum number of objects (tables/partitions) that can be retrieved from metastore in one batch. Set it to a large number (default is 300) until satisfactory latency levels are reached. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause higher memory requirement at the client side.
```hive.metastore.batch.retrieve.max=2000``` * Restart Hive and all stale services
Workaround:
Workaround: 1. Connect to the hive metastore database for your cluster.
-2. Backup the `TBLS` and `TABLE_PARAMS` table using the following command:
+2. Take a backup of the `TBLS` and `TABLE_PARAMS` tables using the following command:
```sql select * into tbls_bak from tbls; select * into table_params_bak from table_params;
Workaround:
## Create table as select (CTAS) creates a new table with same UUID
-Hive 3.1 (HDInsight 4.0) offers a built-in UDF to generate unique UUIDs. Hive UUID() method generates unique IDs even with CTAS. You can leverage it as follows.
+Hive 3.1 (HDInsight 4.0) offers a built-in UDF to generate unique UUIDs. Hive UUID() method generates unique IDs even with CTAS. You can use it as follows.
```hql create table rhive as select uuid() as UUID
from uuid_test
## Hive job output format differs from HDInsight 3.6
-It is caused by the difference of WebHCat(Templeton) between HDInsight 3.6 and HDInsight 4.0.
+It's caused by the difference of WebHCat(Templeton) between HDInsight 3.6 and HDInsight 4.0.
* Hive Rest API - add ```arg=--showHeader=false -d arg=--outputformat=tsv2 -d```
It is caused by the difference of WebHCat(Templeton) between HDInsight 3.6 and H
2. If ```hive.metastore.event.listeners``` has a value, remove it.
-3. DbNotificationListener is needed only if you use REPL commands and if not, it is safe to remove it.
+3. DbNotificationListener is needed only if you use REPL commands and if not, it's safe to remove it.
:::image type="content" source="./media/apache-hive-40-migration-guide/hive-reduce-internal-table-creation-latency.png" alt-text="Reduce internal table latency in HDInsight 4.0" border="true"::: ## Change Hive default table location
-This is a by-design behavior change on HDInsight 4.0 (Hive 3.1). The major reason of this change is for file permission control purposes.
+This behavior change is by design on HDInsight 4.0 (Hive 3.1). The major reason for this change is file permission control.
To create external tables under a custom location, specify the location in the create table statement. ## Disable ACID in HDInsight 4.0
-We recommend enabling ACID in HDInsight 4.0 as most of the recent enhancements (both functional and performance) in Hive are made available only for ACID tables.
+We recommend enabling ACID in HDInsight 4.0. Most of the recent enhancements, both functional and performance, in Hive are made available only for ACID tables.
Steps to disable ACID on HDInsight 4.0: 1. Change the following hive configurations in Ambari:
Steps to disable ACID on HDInsight 4.0:
> [!IMPORTANT] > Microsoft recommends against sharing the same data/storage with HDInsight 3.6 and HDInsight 4.0 Hive-managed tables. It is an unsupported scenario.
-* Normally, above configurations should be set even before creating any Hive tables on HDInsight 4.0 cluster. We shouldn't disable ACID once managed tables are created. It would potentially cause data loss or inconsistent results. So, it is recommended to set it once when you create a new cluster and don't change it later.
+* Normally, the above configurations should be set even before creating any Hive tables on an HDInsight 4.0 cluster. We shouldn't disable ACID once managed tables are created. It would potentially cause data loss or inconsistent results. So, it's recommended to set it once when you create a new cluster and don't change it later.
-* Disabling ACID after creating tables is risky, however in case you want to do it, please follow the below steps to avoid potential data loss or inconsistency:
+* Disabling ACID after creating tables is risky. However, if you want to do it, follow the below steps to avoid potential data loss or inconsistency:
1. Create an external table with same schema and copy the data from original managed table using CTAS command ```create external table e_t1 select * from m_t1```.
- 2. Drop managed table using ```drop table m_t1```.
+ 2. Drop the managed table using ```drop table m_t1```.
3. Disable ACID using the configs suggested. 4. Create m_t1 again and copy data from external table using CTAS command ```create table m_t1 select * from e_t1```. 5. Drop external table using ```drop table e_t1```.
This issue can be resolved by either of the following two options:
2. Change the "Hive Authorization Manager" from ```org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider``` to ```org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly```.
-MetaStoreAuthzAPIAuthorizerEmbedOnly effectively disables security checks because the Hive metastore is not embedded in HDInsight 4.0. However, this may bring other potential issues. Please exercise caution when using this option.
+MetaStoreAuthzAPIAuthorizerEmbedOnly effectively disables security checks because the Hive metastore isn't embedded in HDInsight 4.0. However, it may bring other potential issues. Exercise caution when using this option.
## Permission errors in Hive job after upgrading to HDInsight 4.0
MetaStoreAuthzAPIAuthorizerEmbedOnly effectively disables security checks becaus
:::image type="content" source="./media/apache-hive-40-migration-guide/hive-job-permission-errors.png" alt-text="Set authorization to MetaStoreAuthzAPIAuthorizerEmbedOnly" border="true":::
+## Unable to query table with OpenCSVSerde
+
+Reading data from a `csv` format table may throw an exception like:
+```text
+MetaException(message:java.lang.UnsupportedOperationException: Storage schema reading not supported)
+```
+
+Workaround:
+
+* Add configuration `metastore.storage.schema.reader.impl`=`org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader` in `Custom hive-site` via Ambari UI
+
+* Restart all stale hive services
+ ## Next steps [!INCLUDE [troubleshooting next steps](../includes/hdinsight-troubleshooting-next-steps.md)]
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
The product integrations you can use with the built-in Event Hub-compatible endp
* [Apache Spark integration](../hdinsight/spark/apache-spark-ipython-notebook-machine-learning.md). * [Azure Databricks](/azure/azure-databricks/).
+## Use AMQP-WS or a proxy with Event Hubs SDKs
+
+You can use the Event Hubs SDKs to read from the built-in endpoint in environments where AMQP over WebSockets or reading through a proxy is required. For more information, see the following samples.
+
+| Language | Sample |
+| -- | |
+| .NET | [ReadD2cMessages .NET](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/iot-hub/Quickstarts/ReadD2cMessages) |
+| Java | [read-d2c-messages Java](https://github.com/Azure-Samples/azure-iot-samples-java/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
+| Node.js | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
+| Python | [read-d2c-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
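The samples above are the most complete reference. As an illustrative, minimal sketch (assuming the `azure-eventhub` v5 Python package, a placeholder Event Hubs-compatible connection string, and a hypothetical proxy host and port), reading over AMQP over WebSockets through a proxy might look like this:

```python
from azure.eventhub import EventHubConsumerClient, TransportType

# Placeholders: use your IoT hub's Event Hubs-compatible connection string
# (including the EntityPath) and your own proxy host and port.
client = EventHubConsumerClient.from_connection_string(
    conn_str="<EVENT_HUBS_COMPATIBLE_CONNECTION_STRING>",
    consumer_group="$Default",
    transport_type=TransportType.AmqpOverWebsocket,  # AMQP over WebSockets on port 443
    http_proxy={"proxy_hostname": "<PROXY_HOST>", "proxy_port": 8080},
)

def on_event(partition_context, event):
    # Print each device-to-cloud message as it arrives.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")

with client:
    # "-1" reads from the beginning of each partition; use "@latest" for new events only.
    client.receive(on_event=on_event, starting_position="-1")
```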
+ ## Next steps * For more information about IoT Hub endpoints, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
In this article, learn about Azure Machine Learning releases. For the full SDK
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2021-08-02
+
+### Azure Machine Learning SDK for Python v1.33.0
+ + **azureml-automl-core**
+ + Improved error handling around XGBoost model retrieval.
+ + Added possibility to convert the predictions from float to integers for forecasting and regression tasks.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-automl-runtime**
+ + Added possibility to convert the predictions from float to integers for forecasting and regression tasks.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-contrib-automl-pipeline-steps**
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Add Tabular dataset support for inferencing
+ + Custom path can be specified for the inference data
+ + **azureml-contrib-reinforcementlearning**
+ + Some properties in `azureml.core.environment.DockerSection` are deprecated, such as `shm_size` property used by Ray workers in reinforcement learning jobs. This property can now be specified in `azureml.contrib.train.rl.WorkerConfiguration` instead.
+ + **azureml-core**
+ + Fixed a hyperlink in `ScriptRunConfig.distributed_job_config` documentation
+ + Azure Machine Learning compute clusters can now be created in a location different from the location of the workspace. This is useful for maximizing idle capacity allocation and managing quota utilization across different locations without having to create more workspaces just to use quota and create a compute cluster in a particular location. For more information, see [Create an Azure Machine Learning compute cluster](https://docs.microsoft.com/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python).
+ + Added display_name as a mutable name field of Run object.
+ + Dataset from_files now supports skipping of data extensions for large input data
+ + **azureml-dataprep**
+ + Fixed a bug where to_dask_dataframe would fail because of a race condition.
+ + Dataset from_files now supports skipping of data extensions for large input data
+ + **azureml-defaults**
+ + We are removing the dependency azureml-model-management-sdk==1.0.1b6.post1 from azureml-defaults.
+ + **azureml-interpret**
+ + updated azureml-interpret to interpret-community 0.19.*
+ + **azureml-pipeline-core**
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + **azureml-train-automl-client**
+ + Switch to using blob store for caching in Automated ML.
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Improved error handling around XGBoost model retrieval.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-train-automl-runtime**
+ + Switch to using blob store for caching in Automated ML.
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
++ ## 2021-07-06 ### Azure Machine Learning SDK for Python v1.32.0
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
You can [create a compute instance](how-to-create-manage-compute-instance.md?tab
You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance as per your needs.
+A compute instance is also a secure training compute target, similar to a compute cluster, but it has a single node.
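As an illustrative sketch (the compute instance name, training script, and experiment name below are placeholders), submitting a training run to a compute instance with the Python SDK looks much like targeting a compute cluster:

```python
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()
env = Environment.get(workspace=ws, name="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu")

# "my-compute-instance" is a placeholder for an existing compute instance in the workspace.
src = ScriptRunConfig(
    source_directory=".",
    script="train.py",
    compute_target="my-compute-instance",
    environment=env,
)

run = Experiment(ws, "train-on-compute-instance").submit(src)
run.wait_for_completion(show_output=True)
```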
+ ## <a name="contents"></a>Tools and environments > [!IMPORTANT]
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
When created, these compute resources are automatically part of your workspace,
|Capability |Compute cluster |Compute instance | ||||
-|Single- or multi-node cluster | **&check;** | |
+|Single- or multi-node cluster | **&check;** | Single node cluster |
|Autoscales each time you submit a run | **&check;** | | |Automatic cluster management and job scheduling | **&check;** | **&check;** | |Support for both CPU and GPU resources | **&check;** | **&check;** |
machine-learning Reference Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/reference-known-issues.md
description: Get a list of the known issues, workarounds, and troubleshooting fo
--++ Previously updated : 07/19/2021 Last updated : 08/02/2021
Virtual Machine.
## Ubuntu
+### Fix GPU on NVIDIA A100 GPU Chip - Azure NDasrv4 Series
+
+The ND A100 v4 series virtual machine is a new flagship addition to the Azure GPU family, designed for high-end Deep Learning training and tightly-coupled scale-up and scale-out HPC workloads.
+
+Due to its different architecture, it requires a different setup for your high-demand workloads to benefit from GPU acceleration using the TensorFlow or PyTorch frameworks.
+
+We are working toward supporting the ND A100 GPUs out-of-the-box. Meanwhile, you can get your GPU working by adding NVIDIA's Fabric Manager and updating the drivers.
+
+Follow these simple steps while in Terminal:
+
+1. Add NVIDIA's repository to install/update drivers - step-by-step instructions can be found [here](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/https://docsupdatetracker.net/index.html#ubuntu-lts)
+2. [OPTIONAL] You can also update your CUDA drivers (from repository above)
+3. Install NVIDIA's Fabric Manager drivers:
+
+ ```
+ sudo apt-get install cuda-drivers-460
+ sudo apt-get install cuda-drivers-fabricmanager-460
+ ```
+
+4. Reboot your VM (to get your drivers ready)
+5. Enable and start newly installed NVIDIA Fabric Manager service:
+
+ ```
+ sudo systemctl enable nvidia-fabricmanager
+ sudo systemctl start nvidia-fabricmanager
+ ```
+
+You can now check your drivers and GPU working by running:
+```
+systemctl status nvidia-fabricmanager.service
+```
+
+After that, you should see the Fabric Manager service running:
+![nvidia-fabric-manager-status](./media/nvidia-fabricmanager-status-ok-marked.png)
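As an additional, optional check (assuming PyTorch is installed in the environment you are using on the DSVM), you can confirm from Python that the A100 GPUs are now visible:

```python
# Quick sanity check that CUDA devices are visible after the driver and Fabric Manager setup.
import torch

print(torch.cuda.is_available())      # Expect True once the drivers and Fabric Manager are running
print(torch.cuda.device_count())      # Number of visible GPUs
print(torch.cuda.get_device_name(0))  # Should report an NVIDIA A100 device
```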
++ ### Connection to desktop environment fails If you can connect to the DSVM over SSH terminal but not over x2go, you might have set the wrong session type in x2go.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-assign-roles.md
Here are a few things to be aware of while you use Azure role-based access contr
- To deploy your compute resources inside a VNet, you need to explicitly have permissions for the following actions: - `Microsoft.Network/virtualNetworks/*/read` on the VNet resources.
- - `Microsoft.Network/virtualNetworks/subnet/join/action` on the subnet resource.
+ - `Microsoft.Network/virtualNetworks/subnets/join/action` on the subnet resource.
For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
Configure `max_concurrent_iterations` in your `AutoMLConfig` object. If it is n
## Explore models and metrics
+> [!WARNING]
+> The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiments final metrics score due to these factors.
+ Automated ML offers options for you to monitor and evaluate your training results. * You can view your training results in a widget or inline if you are in a notebook. See [Monitor automated machine learning runs](#monitor) for more details.
best_run, model_from_aml = automl_run.get_output()
print_model(model_from_aml) ```
-> [!NOTE]
-> The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiments final metrics score due to these factors.
## <a name="monitor"></a> Monitor automated machine learning runs
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-cli.md
If you're using Linux, the fastest way to install the necessary CLI version and
:::code language="bash" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_install_linux":::
-For more, see [Install the Azure CLI for Linux](https://docs.microsoft.com/cli/azure/install-azure-cli-linux).
+For more, see [Install the Azure CLI for Linux](/cli/azure/install-azure-cli-linux).
## Set up
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-mlflow-models.md
The following diagram demonstrates that with the MLflow deploy API and Azure Mac
To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md). To deploy to ACI, you don't need to define any deployment configuration; the service defaults to an ACI deployment when a config is not provided.-
-> [!NOTE]
-> You can set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method values as a reference if you want to customize your deployment parameters.
-
-```python
-from azureml.core.webservice import AciWebservice, Webservice
-
-# Set the model path to the model folder created by your run
-model_path = "model"
-
-# Configure
-aci_config = AciWebservice.deploy_configuration(cpu_cores=1,
- memory_gb=1,
- tags={'method' : 'sklearn'},
- description='Diabetes model',
- location='eastus2')
-```
- Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
client.create_deployment(model_uri='runs:/{}/{}'.format(run.id, model_path),
name="mlflow-test-aci") ```
+### Customize deployment configuration
+
+If you prefer not to use the defaults, you can set up your deployment configuration with a deployment config json file that uses parameters from the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method as reference.
+
+For your deployment config json file, each of the deployment config parameters need to be defined in the form of a dictionary. The following is an example. [Learn more about what your deployment configuration json file can contain](reference-azure-machine-learning-cli.md#azure-container-instance-deployment-configuration-schema).
+
+```json
+{"computeType": "aci",
+ "containerResourceRequirements": {"cpu": 1, "memoryInGB": 1},
+ "location": "eastus2"
+}
+```
+
+Your json file can then be used to create your deployment.
+
+```python
+# set the deployment config
+deploy_path = "deployment_config.json"
+test_config = {'deploy-config-file': deploy_path}
+
+client.create_deployment(model_uri='runs:/{}/{}'.format(run.id, model_path),
+ config=test_config,
+ name="mlflow-test-aci")
+```
+ ## Deploy to Azure Kubernetes Service (AKS)
print(aks_target.provisioning_errors)
Create a deployment config json using [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aks.aksservicedeploymentconfiguration#parameters) method values as a reference. Each of the deployment config parameters simply need to be defined as a dictionary. Here's an example below: ```json
-{'computeType': 'aks', 'computeTargetName': 'aks-mlflow'}
+{"computeType": "aks", "computeTargetName": "aks-mlflow"}
``` Then, register and deploy the model in one step with MLflow's [deployment client](https://www.mlflow.org/docs/latest/python_api/mlflow.deployments.html).
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
* Your Azure Machine Learning workspace must contain an [Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md).
-* Your Azure Container Registry must have [admin user enabled](https://docs.microsoft.com/azure/container-registry/container-registry-authentication?tabs=azure-cli#admin-account).
+* Your Azure Container Registry must have [admin user enabled](/azure/container-registry/container-registry-authentication?tabs=azure-cli#admin-account).
## Limitations
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
Use the `Environment.get` method to select one of the curated environments:
from azureml.core import Workspace, Environment ws = Workspace.from_config()
-env = Environment.get(workspace=ws, name="AzureML-Minimal")
+env = Environment.get(workspace=ws, name="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu")
``` You can list the curated environments and their packages by using the following code:
for env in envs:
To customize a curated environment, clone and rename the environment. ```python
-env = Environment.get(workspace=ws, name="AzureML-Minimal")
+env = Environment.get(workspace=ws, name="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu")
curated_clone = env.clone("customize_curated") ```
Please note that Python is an implicit dependency in Azure Machine Learning so a
```python # Specify docker steps as a string. dockerfile = r"""
-FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
RUN echo "Hello from custom container!" """
build = env.build_local(workspace=ws, useDocker=True, pushImageToWorkspaceAcr=Tr
### Utilize adminless Azure Container Registry (ACR) with VNet
-It is no longer required for users to have admin mode enabled on their workspace attached ACR in VNet scenarios. Ensure that the derived image build time on the compute is less than 1 hour to enable successful build. Once the image is pushed to the workspace ACR, this image can now only be accessed with a compute identity. For more information on set up, please see [here](https://docs.microsoft.com/azure/machine-learning/how-to-use-managed-identities).
+It is no longer required for users to have admin mode enabled on their workspace-attached ACR in VNet scenarios. Ensure that the derived image build time on the compute is less than 1 hour to enable a successful build. Once the image is pushed to the workspace ACR, it can only be accessed with a compute identity. For more information on setup, see [How to use managed identities with Azure Machine Learning](/azure/machine-learning/how-to-use-managed-identities).
## Use environments for training
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] ++ * This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
+ > [!NOTE]
+ > Please ensure that you have version **0.9.0** (or higher) of the CLI extension `cosmosdb-preview` installed in your cloud shell. This is required for all the commands listed below to function properly. You can check extension versions by running `az --version`. If necessary, upgrade using `az extension update --name cosmosdb-preview`.
+ * [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article. ## <a id="create-account"></a>Configure a hybrid cluster
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
> [!NOTE] > The `assignee` and `role` values in the previous command are fixed service principal and role identifiers respectively.
-1. Next, we will configure resources for our hybrid cluster. Since you already have a cluster, the cluster name here will only be a logical resource to identify the name of your existing cluster. Make sure to use the name of your existing cluster when defining `clusterName` and `clusterNameOverride` variables in the following script. You also need the seed nodes, public client certificates (if you have configured a public/private key on your cassandra endpoint), and gossip certificates of your existing cluster.
+1. Next, we will configure resources for our hybrid cluster. Since you already have a cluster, the cluster name here will only be a logical resource to identify the name of your existing cluster. Make sure to use the name of your existing cluster when defining `clusterName` and `clusterNameOverride` variables in the following script.
+
+ You also need, at minimum, the seed nodes from your existing datacenter and the gossip certificates required for node-to-node encryption. Azure Managed Instance for Apache Cassandra requires node-to-node encryption for communication between datacenters. If you do not have node-to-node encryption implemented in your existing cluster, you will need to implement it - see the documentation [here](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLNodeToNode.html). You should supply the path to the location of the certificates. Each certificate should be in PEM format, for example `--BEGIN CERTIFICATE--\n...PEM format 1...\n--END CERTIFICATE--`. In general, there are two ways of implementing certificates:
+
+ 1. Self-signed certificates. This means a private and public (no CA) certificate for each node - in this case, you need all of the public certificates.
+
+ 1. Certificates signed by a CA. This can be a self-signed CA or even a public one. In this case, you need the root CA certificate (refer to the instructions on [preparing SSL certificates for production](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html)) and all intermediaries (if applicable).
+
+ Optionally, if you have also implemented client-to-node certificates (see [here](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLClientToNode.html)), you also need to provide them in the same format when creating the hybrid cluster. See sample below.
> [!NOTE] > The value of the `delegatedManagementSubnetId` variable you will supply below is exactly the same as the value of `--scope` that you supplied in the command above:
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
--resource-group $resourceGroupName \ --location $location \ --delegated-management-subnet-id $delegatedManagementSubnetId \
- --external-seed-nodes 10.52.221.2,10.52.221.3,10.52.221.4
- --client-certificates 'BEGIN CERTIFICATE--\n...PEM format..\n--END CERTIFICATE--','BEGIN CERTIFICATE--\n...PEM format...\n--END CERTIFICATE--' \
- --external-gossip-certificates 'BEGIN CERTIFICATE--\n...PEM format 1...\n--END CERTIFICATE--','BEGIN CERTIFICATE--\n...PEM format 2...\n--END CERTIFICATE--'
+ --external-seed-nodes 10.52.221.2 10.52.221.3 10.52.221.4 \
+ --external-gossip-certificates /usr/csuser/clouddrive/rootCa.pem /usr/csuser/clouddrive/gossipKeyStore.crt_signed
+ # optional - add your existing datacenter's client-to-node certificates (if implemented):
+ # --client-certificates /usr/csuser/clouddrive/rootCa.pem /usr/csuser/clouddrive/nodeKeyStore.crt_signed
``` > [!NOTE]
- > You should know where your existing public and/or gossip certificates are kept. If you are uncertain, you should be able to run `keytool -list -keystore <keystore-path> -rfc -storepass <password>` to print the certs.
+ > If your cluster already has node-to-node and client-to-node encryption, you should know where your existing client and/or gossip SSL certificates are kept. If you are uncertain, you should be able to run `keytool -list -keystore <keystore-path> -rfc -storepass <password>` to print the certs.
1. After the cluster resource is created, run the following command to get the cluster setup details:
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
--resource-group $resourceGroupName \ ```
-1. The previous command returns information about the managed instance environment. You'll need the gossip certificates so that you can install them on the nodes in your existing datacenter. The following screenshot shows the output of the previous command and the format of certificates:
+1. The previous command returns information about the managed instance environment. You'll need the gossip certificates so that you can install them on the trust store for nodes in your existing datacenter. The following screenshot shows the output of the previous command and the format of certificates:
:::image type="content" source="./media/configure-hybrid-cluster/show-cluster.png" alt-text="Get the certificate details from the cluster." lightbox="./media/configure-hybrid-cluster/show-cluster.png" border="true"::: <!-- ![image](./media/configure-hybrid-cluster/show-cluster.png) -->
+ > [!NOTE]
+ > Note that the certificates returned from the above command contain line breaks represented as text, for example `\r\n`. You should copy each certificate to a file, then format it before attempting to import it into your existing datacenter's trust store. For example:
+ > ```bash
+ > var=$(<cert.txt)
+ > echo -e $var >> cert-formatted.txt
+ > ```
++ 1. Next, create a new datacenter in the hybrid cluster. Make sure to replace the variable values with your cluster details: ```azurecli-interactive
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
--data-center-name $dataCenterName ```
-1. The previous command outputs the new datacenter's seed nodes. Add the new datacenter's seed nodes to your existing datacenter's configuration within the *cassandra.yaml* file. And install the managed instance gossip certificates that you collected earlier:
+1. The previous command outputs the new datacenter's seed nodes. Now add the new datacenter's seed nodes to your existing datacenter's seed node configuration within the *cassandra.yaml* file (a sketch of the format follows the screenshot below). Then install the managed instance gossip certificates that you collected earlier to the trust store for each node in your existing cluster:
:::image type="content" source="./media/configure-hybrid-cluster/show-datacenter.png" alt-text="Get datacenter details." lightbox="./media/configure-hybrid-cluster/show-datacenter.png" border="true"::: <!-- ![image](./media/configure-hybrid-cluster/show-datacenter.png) -->
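For reference, the seed list in *cassandra.yaml* is a comma-separated string under `seed_provider`. A minimal sketch, using placeholder addresses for your existing seeds and the managed instance seed nodes returned by the previous command, looks like this:

```yaml
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # existing on-premises seeds plus the managed instance seed nodes (placeholders)
      - seeds: "10.52.221.2,10.52.221.3,10.52.221.4,<managed-instance-seed-1>,<managed-instance-seed-2>"
```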
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
1. Finally, use the following CQL query to update the replication strategy in each keyspace to include all datacenters across the cluster: ```bash
- ALTER KEYSPACE "ks" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', ΓÇÿon-premise-dc': 3, ΓÇÿmanaged-instance-dc': 3};
+ ALTER KEYSPACE "ks" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'on-premise-dc': 3, 'managed-instance-dc': 3};
``` You also need to update the password tables: ```bash
- ALTER KEYSPACE "system_auth" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', ΓÇÿon-premise-dc': 3, ΓÇÿmanaged-instance-dc': 3}
+ ALTER KEYSPACE "system_auth" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'on-premise-dc': 3, 'managed-instance-dc': 3}
``` ## Troubleshooting
managed-instance-apache-cassandra Network Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/network-rules.md
The required network rules and IP address dependencies are:
| Destination Endpoint | Protocol | Port | Use | |-|-||| |snovap`<region>`.blob.core.windows.net:443</br> Or</br> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Azure Storage | HTTPS | 443 | Required for secure communication between the nodes and Azure Storage for Control Plane communication and configuration.|
+|*.store.core.windows.net:443</br> Or</br> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Azure Storage | HTTPS | 443 | Required for secure communication between the nodes and Azure Storage for Control Plane communication and configuration.|
|*.blob.core.windows.net:443</br> Or</br> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Azure Storage | HTTPS | 443 | Required for secure communication between the nodes and Azure Storage to store backups. *Backup feature is being revised and storage name will follow a pattern by GA*| |vmc-p-`<region>`.vault.azure.net:443</br> Or</br> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Azure KeyVault | HTTPS | 443 | Required for secure communication between the nodes and Azure Key Vault. Certificates and keys are used to secure communication inside the cluster.| |management.azure.com:443</br> Or</br> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Azure Virtual Machine Scale Sets/Azure Management API | HTTPS | 443 | Required to gather information about and manage Cassandra nodes (for example, reboot)|
marketplace Azure Partner Customer Usage Attribution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-partner-customer-usage-attribution.md
There are secondary use cases for customer usage attribution outside of the comm
Tracking Azure usage from Azure apps published to the commercial marketplace is largely automatic. When you upload a Resource Manager template as part of the [technical configuration of your marketplace Azure app's plan](./azure-app-solution.md#define-the-technical-configuration), Partner Center will add a tracking ID readable by Azure Resource Manager.
+>[!NOTE]
+>To ensure your application's usage is attributed accurately in our systems:
+>1. If you define the tracking ID in the resource type Microsoft.Resources/deployments with a variable, replace the variable with the tracking ID visible in Partner Center on the plan's **Technical Configuration** page (see [Add a GUID to a Resource Manager template](#add-a-guid-to-a-resource-manager-template) below).
+>2. If your Resource Manager template uses resources of type Microsoft.Resources/deployments for purposes other than customer usage attribution, Microsoft will be unable to add a customer usage attribution tracking ID on your behalf. Add a new resource of type Microsoft.Resources/deployments and add the tracking ID visible in Partner Center on the plan's **Technical configuration** page (see [Add a GUID to a Resource Manager template](#add-a-guid-to-a-resource-manager-template) below).
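+For reference, the tracking resource is generally an empty nested deployment whose name is `pid-` followed by your tracking ID. The following is a minimal sketch; the GUID and `apiVersion` are placeholders to replace with your own values:
+
+```json
+{
+  "apiVersion": "2021-04-01",
+  "name": "pid-00000000-0000-0000-0000-000000000000",
+  "type": "Microsoft.Resources/deployments",
+  "properties": {
+    "mode": "Incremental",
+    "template": {
+      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+      "contentVersion": "1.0.0.0",
+      "resources": []
+    }
+  }
+}
+```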
+ If you use Azure Resource Manager APIs, you will need to add your tracking ID per the [instructions below](#use-resource-manager-apis) to pass it to Azure Resource Manager as your code deploys resources. This ID is visible in Partner Center on your plan's Technical Configuration page. > [!NOTE]
No. Customers can track their usage of all resources or resource groups within t
#### Is customer usage attribution similar to the digital partner of record (DPOR) or partner admin link (PAL)?
-Customer usage attribution is a mechanism to associate Azure usage with a partner's repeatable, deployable IP - forming the association at time of deployment. DPOR and PAL are intended to associate a consulting (Systems Integrator) or management (Managed Service Provider) partner with a customer's relevant Azure footprint for the time while the partner is engaged with the customer.
+Customer usage attribution is a mechanism to associate Azure usage with a partner's repeatable, deployable IP - forming the association at time of deployment. DPOR and PAL are intended to associate a consulting (Systems Integrator) or management (Managed Service Provider) partner with a customer's relevant Azure footprint for the time while the partner is engaged with the customer.
media-services Stream Files Dotnet Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-files-dotnet-quickstart.md
By the end of the tutorial you will be able to stream a video.
- Install [Visual Studio Code for Windows/macOS/Linux](https://code.visualstudio.com/) or [Visual Studio 2019 for Windows or Mac](https://visualstudio.microsoft.com/). - Install [.NET 5.0 SDK](https://dotnet.microsoft.com/download) - [Create a Media Services account](./account-create-how-to.md). Be sure to copy the **API Access** details in JSON format or store the values needed to connect to the Media Services account in the *.env* file format used in this sample.-- Follow the steps in [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You'll need to use them to access the API in this sample, or enter them into the *.env* file format.
+- Follow the steps in [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md). Be sure to *save the credentials*. You'll need to use them to access the API in this sample, or enter them into the *.env* file format.
## Download and configure the sample
Clone a GitHub repository that contains the streaming .NET sample to your machin
git clone https://github.com/Azure-Samples/media-services-v3-dotnet-quickstarts.git ```
-The sample is located in the [EncodeAndStreamFiles](https://github.com/Azure-Samples/media-services-v3-dotnet-quickstarts/tree/master/AMSV3Quickstarts/EncodeAndStreamFiles) folder.
+The sample is located in the [EncodeAndStreamFiles](https://github.com/Azure-Samples/media-services-v3-dotnet-quickstarts/tree/master/AMSV3Quickstarts/EncodeAndStreamFiles) folder under AMSV3Quickstarts.
[!INCLUDE [appsettings or .env file](./includes/note-appsettings-or-env-file.md)]
For explanations about what each function in the sample does, examine the code a
When you run the app, URLs that can be used to playback the video using different protocols are displayed.
-1. Press Ctrl+F5 to run the *EncodeAndStreamFiles* application.
-2. Choose the Apple's **HLS** protocol (ends with *manifest(format=m3u8-aapl)*) and copy the streaming URL from the console.
+1. Open AMSV3Quickstarts in VSCode.
+2. Press Ctrl+F5 to run the *EncodeAndStreamFiles* application with .NET. This may take a few minutes.
+3. The app will output three URLs. You will use these URLs to test the stream in the next step.
![Screenshot of the output from the EncodeAndStreamFiles app in Visual Studio showing three streaming URLs for use in the Azure Media Player.](./media/stream-files-tutorial-with-api/output.png)
To test the stream, this article uses Azure Media Player.
2. In the **URL:** box, paste one of the streaming URL values you got when you ran the application. You can paste the URL in HLS, Dash, or Smooth format and Azure Media Player will switch to an appropriate streaming protocol for playback on your device automatically.
-3. Press **Update Player**.
+3. Press **Update Player**. This should start playing the video file in the repository.
Azure Media Player can be used for testing but should not be used in a production environment.
media-services Stream Live Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-tutorial-with-api.md
git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
The live-streaming sample is in the [Live](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Live) folder.
-Open [appsettings.json](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/appsettings.json) in your downloaded project. Replace the values with the credentials that you got from [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md).
-
-Note that you can also use the *.env* file format at the root of the project to set your environment variables only once for all projects in the .NET samples repository. Just copy the *sample.env* file, and then fill out the information that you got from the Media Services **API Access** page in the Azure portal or from the Azure CLI. Rename the *sample.env* file to just *.env* to use it across all projects.
-
-The *.gitignore* file is already configured to prevent publishing this file into your forked repository.
> [!IMPORTANT] > This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple live events in your account.
The *.gitignore* file is already configured to prevent publishing this file into
## Examine the code that performs live streaming
-This section examines functions defined in the [Authentication.cs](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Common_Utils/Authentication.cs) file and [Program.cs](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/Program.cs) file of the *LiveEventWithDVR* project.
+This section examines functions defined in the [Authentication.cs](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Common_Utils/Authentication.cs) file (in the Common_Utils folder) and [Program.cs](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/Program.cs) file of the *LiveEventWithDVR* project.
The sample creates a unique suffix for each resource so that you don't have name collisions if you run the sample multiple times without cleaning up. ### Start using Media Services APIs with the .NET SDK
-To start using Media Services APIs with .NET, you need to create an `AzureMediaServicesClient` object. To create the object, you need to supply credentials for the client to connect to Azure by using Azure Active Directory. Another option is to use interactive authentication, which is implemented in `GetCredentialsInteractiveAuthAsync`.
+Authentication.cs creates an `AzureMediaServicesClient` object using credentials supplied in the local configuration files (appsettings.json or .env).
+
+An `AzureMediaServicesClient` object allows you to start using Media Services APIs with .NET. To create the object, you need to supply credentials for the client to connect to Azure by using Azure Active Directory, which is implemented in `GetCredentialsAsync`. Another option is to use interactive authentication, which is implemented in `GetCredentialsInteractiveAuthAsync`.
[!code-csharp[Main](../../../media-services-v3-dotnet/Common_Utils/Authentication.cs#CreateMediaServicesClientAsync)]
If you're done streaming events and want to clean up the resources provisioned e
## Watch the event
-To watch the event, copy the streaming URL that you got when you ran the code to create a streaming locator. You can use a media player of your choice. [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) is available to test your stream at the [Media Player demo site](https://ampdemo.azureedge.net).
+Press **Ctrl+F5** to run the code. This will output streaming URLs that you can use to watch your live event. Copy one of the streaming URLs that's output when the code creates the streaming locator. You can use a media player of your choice. [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) is available to test your stream at the [Media Player demo site](https://ampdemo.azureedge.net).
A live event automatically converts events to on-demand content when it's stopped. Even after you stop and delete the event, users can stream your archived content as a video on demand for as long as you don't delete the asset. An asset can't be deleted if an event is using it; the event must be deleted first.
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance.md
Title: Troubleshoot Azure Migrate appliance deployment and discovery
-description: Get help with appliance deployment and server discovery.
+ Title: Troubleshoot Azure Migrate appliance
+description: Get help to troubleshoot problems that might occur with the Azure Migrate appliance.
ms.
Last updated 07/01/2020
-# Troubleshoot the Azure Migrate appliance and discovery
+# Troubleshoot the Azure Migrate appliance
This article helps you troubleshoot issues when deploying the [Azure Migrate](migrate-services-overview.md) appliance, and using the appliance to discover on-premises servers.
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-networking.md
Here are some concepts to be familiar with when using virtual networks with Post
* **Network security groups (NSG)** - Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. See [network security group overview](../../virtual-network/network-security-groups-overview.md) documentation for more information.
+ Application security groups make it easy to control Layer-4 security by using NSGs for flat networks. You can quickly and easily add virtual machines to, or remove them from, an application security group and dynamically apply or remove rules for those virtual machines. See the [application security group overview](https://docs.microsoft.com/azure/virtual-network/application-security-groups). At this time, Azure Database for PostgreSQL - Flexible Server does not support NSG rules in which an Application Security Group (ASG) is part of the rule. It's currently advised to use [IP-based source/destination filtering](https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview#security-rules) in the NSG instead, as shown in the example rule below.
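+ As an illustration, the following rule filters by a source address prefix instead of referencing an ASG; the resource names and address range are placeholders:
+
+ ```azurecli
+ az network nsg rule create \
+   --resource-group myResourceGroup \
+   --nsg-name myNsg \
+   --name AllowAppSubnetToPostgres \
+   --priority 100 \
+   --direction Inbound \
+   --access Allow \
+   --protocol Tcp \
+   --source-address-prefixes 10.0.1.0/24 \
+   --destination-port-ranges 5432
+ ```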
+ * **Private DNS zone integration** - Azure private DNS zone integration allows you to resolve the private DNS within the current VNET or any in-region peered VNET where the private DNS Zone is linked.
security-center Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/deploy-vulnerability-assessment-vm.md
Previously updated : 05/24/2021 Last updated : 08/02/2021
The vulnerability scanner extension works as follows:
>[!IMPORTANT] > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following IPs to your allow lists (via port 443 - the default for HTTPS): >
- > - 64.39.104.113 - Qualys' US data center
- > - 154.59.121.74 - Qualys' European data center
+ > - https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
+ >
+ > - https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
> > If your machine is in a European Azure region, its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
The Azure Security Center vulnerability assessment extension (powered by Qualys)
During setup, Security Center checks to ensure that the machine can communicate with the following two Qualys data centers (via port 443 - the default for HTTPS): -- 64.39.104.113 - Qualys' US data center-- 154.59.121.74 - Qualys' European data center
+- https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
+- https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
The extension doesn't currently accept any proxy configuration details.
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/encryption-atrest.md
Encryption is the secure encoding of data used to protect confidentiality of dat
- A symmetric encryption key is used to encrypt data as it is written to storage. - The same encryption key is used to decrypt that data as it is readied for use in memory. - Data may be partitioned, and different keys may be used for each partition.-- Keys must be stored in a secure location with identity-based access control and audit policies. Data encryption keys are often encrypted with a key encryption key in Azure Key Vault to further limit access.
+- Keys must be stored in a secure location with identity-based access control and audit policies. Data encryption keys which are stored outside of secure locations are encrypted with a key encryption key kept in a secure location.
In practice, key management and control scenarios, as well as scale and availability assurances, require additional constructs. Microsoft Azure Encryption at Rest concepts and components are described below.
More than one encryption key is used in an encryption at rest implementation. St
- **Data Encryption Key (DEK)** - A symmetric AES256 key used to encrypt a partition or block of data. A single resource may have many partitions and many Data Encryption Keys. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When a DEK is replaced with a new key only the data in its associated block must be re-encrypted with the new key. - **Key Encryption Key (KEK)** - An encryption key used to encrypt the Data Encryption Keys. Use of a Key Encryption Key that never leaves Key Vault allows the data encryption keys themselves to be encrypted and controlled. The entity that has access to the KEK may be different than the entity that requires the DEK. An entity may broker access to the DEK to limit the access of each DEK to a specific partition. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deletion of the KEK.
-The Data Encryption Keys, encrypted with the Key Encryption Keys are stored separately and only an entity with access to the Key Encryption Key can decrypt these Data Encryption Keys. Different models of key storage are supported. See [data encryption models](encryption-models.md) for more information.
+Resource providers and application instances store the Data Encryption Keys encrypted with the Key Encryption Keys, often as metadata about the data protected by the Data Encryption Keys. Only an entity with access to the Key Encryption Key can decrypt these Data Encryption Keys. Different models of key storage are supported. See [data encryption models](encryption-models.md) for more information.
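+To illustrate the envelope-encryption concept only (this is not Azure's implementation), a sketch in which a DEK encrypts the data and a KEK wraps the DEK might look like this:
+
+```python
+# Conceptual sketch of envelope encryption; not Azure's implementation.
+import os
+from cryptography.hazmat.primitives.ciphers.aead import AESGCM
+
+kek = AESGCM.generate_key(bit_length=256)   # stands in for the Key Vault-held KEK
+dek = AESGCM.generate_key(bit_length=256)   # Data Encryption Key for one partition
+
+# Encrypt a block of data with the DEK
+nonce = os.urandom(12)
+ciphertext = AESGCM(dek).encrypt(nonce, b"sensitive data", None)
+
+# Wrap the DEK with the KEK; only the wrapped DEK is stored alongside the data
+wrap_nonce = os.urandom(12)
+wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)
+```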
## Encryption at rest in Microsoft cloud services
service-bus-messaging Build Message Driven Apps Nservicebus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/build-message-driven-apps-nservicebus.md
+
+ Title: Build message-driven applications with NServiceBus and Azure Service Bus
+description: Learn how to solve complex problems with distributed systems on Azure Service Bus using the NServiceBus framework.
++++ Last updated : 07/26/2021 +++
+# Build message-driven business applications with NServiceBus and Azure Service Bus
+NServiceBus is a commercial messaging framework provided by Particular Software. It's built on top of Azure Service Bus and helps developers focus on business logic by abstracting infrastructure concerns. In this guide, we'll build a solution that exchanges messages between two services. We'll also show how to automatically retry failing messages and review options for hosting these services in Azure.
+
+> [!NOTE]
+> The code for this tutorial is available on the [Particular Software Docs web site](https://docs.particular.net/samples/azure-service-bus-netstandard/send-receive-with-nservicebus/).
+
+## Prerequisites
+
+The sample assumes you've [created an Azure Service Bus namespace](service-bus-create-namespace-portal.md).
+
+> [!IMPORTANT]
+> NServiceBus requires at least the Standard tier. The Basic tier won't work.
+
+## Download and prepare the solution
+1. Download the code from the [Particular Software Docs web site](https://docs.particular.net/samples/azure-service-bus-netstandard/send-receive-with-nservicebus/). The solution `SendReceiveWithNservicebus.sln` consists of three projects:
+
+ - **Sender**: a console application that sends messages
+ - **Receiver**: a console application that receives messages from the sender and replies back
+ - **Shared**: a class library containing the message contracts shared between the sender and receiver
+
+ The following diagram, generated by [ServiceInsight](https://particular.net/serviceinsight), a visualization and debugging tool from Particular Software, shows the message flow:
+
+ :::image type="content" source="./media/nservicebus/sequence-diagram.png" alt-text="Image showing the sequence diagram":::
+1. Open `SendReceiveWithNservicebus.sln` in your favorite code editor (for example, Visual Studio 2019).
+1. Open `appsettings.json` in both the Receiver and Sender projects and set `AzureServiceBusConnectionString` to the connection string for your Azure Service Bus namespace (a sketch of the expected shape follows this list).
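+Because the sample reads the value with `GetConnectionString`, the setting typically lives under a `ConnectionStrings` section (the exact layout of the sample's file may differ slightly). A minimal sketch with a placeholder value:
+
+```json
+{
+  "ConnectionStrings": {
+    "AzureServiceBusConnectionString": "Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<your-key>"
+  }
+}
+```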
+++
+## Define the shared message contracts
+
+The Shared class library is where you define the contracts used to send our messages. It includes a reference to the `NServiceBus` NuGet package, which contains interfaces you can use to identify our messages. The interfaces aren't required, but they give us some extra validation from NServiceBus and allow the code to be self-documenting.
+
+First, we'll review the `Ping.cs` class:
+
+```csharp
+public class Ping : NServiceBus.ICommand
+{
+ public int Round { get; set; }
+}
+```
+
+The `Ping` class defines a message that the Sender sends to the Receiver. It's a simple C# class that implements `NServiceBus.ICommand`, an interface from the NServiceBus package. Implementing the interface signals to the reader and to NServiceBus that this message is a command, although there are other ways to identify messages [without using interfaces](https://docs.particular.net/nservicebus/messaging/conventions).
+
+The other message class in the Shared projects is `Pong.cs`:
+
+```csharp
+public class Pong : NServiceBus.IMessage
+{
+ public string Acknowledgement { get; set; }
+}
+```
+
+`Pong` is also a simple C# object though this one implements `NServiceBus.IMessage`. The `IMessage` interface represents a generic message that is neither a command nor an event, and is commonly used for replies. In our sample, it's a reply that the Receiver sends back to the Sender to indicate that a message was received.
+
+`Ping` and `Pong` are the two message types you'll use. The next step is to configure the Sender to use Azure Service Bus and to send a `Ping` message.
+
+## Set up the sender
+
+The Sender is an endpoint that sends our `Ping` message. Here, you configure the Sender to use Azure Service Bus as the transport mechanism, then construct a `Ping` instance and send it.
+
+In the `Main` method of `Program.cs`, you configure the Sender endpoint:
+
+```csharp
+var host = Host.CreateDefaultBuilder(args)
+ // Configure a host for the endpoint
+ .ConfigureLogging((context, logging) =>
+ {
+ logging.AddConfiguration(context.Configuration.GetSection("Logging"));
+
+ logging.AddConsole();
+ })
+ .UseConsoleLifetime()
+ .UseNServiceBus(context =>
+ {
+ // Configure the NServiceBus endpoint
+ var endpointConfiguration = new EndpointConfiguration("Sender");
+
+ var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
+ var connectionString = context.Configuration.GetConnectionString("AzureServiceBusConnectionString");
+ transport.ConnectionString(connectionString);
+
+ transport.Routing().RouteToEndpoint(typeof(Ping), "Receiver");
+
+ endpointConfiguration.EnableInstallers();
+ endpointConfiguration.AuditProcessedMessagesTo("audit");
+
+ return endpointConfiguration;
+ })
+ .ConfigureServices(services => services.AddHostedService<SenderWorker>())
+ .Build();
+
+await host.RunAsync();
+```
+
+There's a lot to unpack here so we'll review it step by step.
+
+### Configure a host for the endpoint
+
+Hosting and logging are configured using standard [Microsoft Generic Host options](/dotnet/core/extensions/generic-host). For now, the endpoint is configured to run as a console application but it can be modified to run in Azure Functions with minimal changes, which we'll discuss later in this article.
+
+### Configure the NServiceBus endpoint
+
+Next, you tell the host to use NServiceBus with the `.UseNServiceBus(…)` extension method. The method takes a callback function that returns an endpoint that will be started when the host runs.
+
+In the endpoint configuration, you specify `AzureServiceBus` for our transport, providing a connection string from `appsettings.json`. Next, you'll set up the routing so that messages of type `Ping` are sent to an endpoint named "Receiver". It allows NServiceBus to automate the process of dispatching the message to the destination without requiring the receiver's address.
+
+The call to `EnableInstallers` will set up our topology in the Azure Service Bus namespace when the endpoint is launched, creating the required queues where necessary. In production environments, [operational scripting](https://docs.particular.net/transports/azure-service-bus/operational-scripting) is another option to create the topology.
+
+### Set up background service to send messages
+
+The final piece of the sender is `SenderWorker`, a background service that is configured to send a `Ping` message every second.
+
+```csharp
+public class SenderWorker : BackgroundService
+{
+ private readonly IMessageSession messageSession;
+ private readonly ILogger<SenderWorker> logger;
+
+ public SenderWorker(IMessageSession messageSession, ILogger<SenderWorker> logger)
+ {
+ this.messageSession = messageSession;
+ this.logger = logger;
+ }
+
+ protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+ {
+ try
+ {
+ var round = 0;
+ while (!stoppingToken.IsCancellationRequested)
+ {
+ await messageSession.Send(new Ping { Round = round++ })
+ .ConfigureAwait(false);
+
+ logger.LogInformation($"Message #{round}");
+
+ await Task.Delay(1_000, stoppingToken)
+ .ConfigureAwait(false);
+ }
+ }
+ catch (OperationCanceledException)
+ {
+ // graceful shutdown
+ }
+ }
+}
+```
+
+The `IMessageSession` used in `ExecuteAsync` is injected into `SenderWorker` and allows us to send messages using NServiceBus outside of a message handler. The routing you configured in `Sender` specifies the destination of the `Ping` messages. It keeps the topology of the system (which messages are routed to which addresses) as a separate concern from the business code.
+
+The Sender application also contains a `PongHandler`. We'll come back to it after we've discussed the Receiver, which is up next.
+
+## Set up the receiver
+
+The Receiver is an endpoint that listens for a `Ping` message, logs when a message is received, and replies back to the sender. In this section, we'll quickly review the endpoint configuration, which is similar to the Sender, and then turn our attention to the message handler.
+
+Like the sender, set up the receiver as a console application using the Microsoft Generic Host. It uses the same logging and endpoint configuration (with Azure Service Bus as the message transport) but with a different name, to distinguish it from the sender:
+
+```csharp
+var endpointConfiguration = new EndpointConfiguration("Receiver");
+```
+
+Since this endpoint only replies to its originator and doesn't start new conversations, no routing configuration is required. It also doesn't need a background worker like the Sender does, since it only replies when it receives a message.
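+For reference, a minimal sketch of the Receiver's endpoint configuration, mirroring the Sender's `UseNServiceBus` callback shown earlier but without the routing or the hosted worker (the sample's code may differ slightly):
+
+```csharp
+.UseNServiceBus(context =>
+{
+    var endpointConfiguration = new EndpointConfiguration("Receiver");
+
+    var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
+    var connectionString = context.Configuration.GetConnectionString("AzureServiceBusConnectionString");
+    transport.ConnectionString(connectionString);
+
+    endpointConfiguration.EnableInstallers();
+    endpointConfiguration.AuditProcessedMessagesTo("audit");
+
+    return endpointConfiguration;
+})
+```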
+
+### The Ping message handler
+
+The Receiver project contains a _message handler_ named `PingHandler`:
+
+```csharp
+public class PingHandler : NServiceBus.IHandleMessages<Ping>
+{
+ private readonly ILogger<PingHandler> logger;
+
+ public PingHandler(ILogger<PingHandler> logger)
+ {
+ this.logger = logger;
+ }
+
+ public async Task Handle(Ping message, IMessageHandlerContext context)
+ {
+ logger.LogInformation($"Processing Ping message #{message.Round}");
+
+ // throw new Exception("BOOM");
+
+ var reply = new Pong { Acknowledgement = $"Ping #{message.Round} processed at {DateTimeOffset.UtcNow:s}" };
+
+ await context.Reply(reply);
+ }
+}
+```
+
+Let's ignore the commented code for now; we'll get back to it later when we talk about recovering from failure.
+
+The class implements `IHandleMessages<Ping>`, which defines one method: `Handle`. This interface tells NServiceBus that when the endpoint receives a message of type `Ping`, it should be processed by the `Handle` method in this handler. The `Handle` method takes the message itself as a parameter, and an `IMessageHandlerContext`, which allows further messaging operations, such as replying, sending commands, or publishing events.
+
+Our `PingHandler` is straightforward: when a `Ping` message is received, log the message details and reply back to the sender with a new `Pong` message.
+
+> [!NOTE]
+> In the Sender's configuration, you specified that `Ping` messages should be routed to the Receiver. NServiceBus adds metadata to the messages indicating, among other things, the origin of the message. This is why you don't need to specify any routing data for the `Pong` reply message; it's automatically routed back to its origin: the Sender.
+>
+
+With the Sender and Receiver both properly configured, you can now run the solution.
+
+## Run the solution
+
+To launch the solution, you need to run both the Sender and the Receiver. If you're using Visual Studio Code, launch the "Debug All" configuration. If you're using Visual Studio, configure the solution to launch both the Sender and Receiver projects:
+
+1. Right-click the solution in Solution Explorer
+1. Select "Set Startup Projects..."
+1. Select **Multiple startup projects**
+1. For both the Sender and the Receiver, select "Start" in the dropdown list
+
+Launch the solution. Two console applications will appear, one for the Sender and one for the Receiver.
+
+# [Sender](#tab/Sender)
++
+# [Receiver](#tab/Receiver)
++++
+In the Sender, notice that a `Ping` message is dispatched every second, thanks to the `SenderWorker` background job. The Receiver displays the details of each `Ping` message it receives and the Sender logs the details of each `Pong` message it receives in reply.
+
+Now that you have everything working, let's break it.
+
+## Resilience in action
+
+Errors are a fact of life in software systems. It's inevitable that code will fail and it can do so for various reasons, such as network failures, database locks, changes in a third-party API, and plain old coding errors.
+
+NServiceBus has robust recoverability features for handling failures. When a message handler fails, messages are automatically retried based on a pre-defined policy. There are two types of retry policy: immediate retries and delayed retries. The best way to describe how they work is to see them in action. Let's add a retry policy to our Receiver endpoint:
+
+1. Open `Program.cs` in the Receiver project
+1. After the `.EnableInstallers` line, add the following code:
+
+```csharp
+endpointConfiguration.SendFailedMessagesTo("error");
+var recoverability = endpointConfiguration.Recoverability();
+recoverability.Immediate(
+ immediate =>
+ {
+ immediate.NumberOfRetries(3);
+ });
+recoverability.Delayed(
+ delayed =>
+ {
+ delayed.NumberOfRetries(2);
+ delayed.TimeIncrease(TimeSpan.FromSeconds(5));
+ });
+```
+
+Before we discuss how this policy works, let's see it in action. To test the recoverability policy, you first need to simulate an error. Open the `PingHandler` code in the Receiver project and uncomment this line:
+
+```csharp
+throw new Exception("BOOM");
+```
+
+Now, when the Receiver handles a `Ping` message, it will fail. Launch the solution again and let's see what happens in the Receiver.
+
+With our less reliable `PingHandler`, all of our messages fail. You can see the retry policy kicking in for those messages. The first time a message fails, it's immediately retried up to three times:
++
+Of course, it will continue to fail, so when the three immediate retries are used up, the delayed retry policy kicks in and the message is delayed for 5 seconds:
++
+ After those 5 seconds have passed, the message is retried another three times (that is, another iteration of the immediate retry policy). These retries will also fail, and NServiceBus will delay the message again, this time for 10 seconds, before trying again.
+
+If `PingHandler` still doesn't succeed after running through the full retry policy, the message is placed in a _centralized_ error queue, named `error`, as defined by the call to `SendFailedMessagesTo`.
++
+The concept of a centralized error queue differs from the dead-lettering mechanism in Azure Service Bus, which has a dead-letter queue for each processing queue. With NServiceBus, the dead-letter queues in Azure Service Bus act as true poison message queues, whereas messages that end up in the centralized error queue can be reprocessed at a later time, if necessary.
+
+The retry policy helps to address [several types of errors](https://particular.net/blog/but-all-my-errors-are-severe) that are often transient or semi-transient in nature. That is, errors that are temporary and often go away if the message is simply reprocessed after a short delay. Examples include network failures, database locks, and third-party API outages.
+
+Once a message is in the error queue, you can examine the message details in the tool of your choice, then decide what to do with it. For example, using [ServicePulse](https://particular.net/servicepulse), a monitoring tool by Particular Software, we can view the message details and the reason for the failure:
++
+After examining the details, you can send the message back to its original queue for processing. You can also edit the message before doing so. If there are multiple messages in the error queue, which failed for the same reason, they can all be sent back to their original destinations as a batch.
+
+Next, it's time to figure out where to deploy our solution in Azure.
+
+## Where to host the services in Azure
+
+In this sample, the Sender and Receiver endpoints are configured to run as console applications. They can also be hosted in various Azure services including Azure Functions, Azure App Services, Azure Container Instances, Azure Kubernetes Services, and Azure VMs. For example, here's how the Sender endpoint can be configured to run as an Azure Function:
+
+```csharp
+[assembly: FunctionsStartup(typeof(Startup))]
+[assembly: NServiceBusEndpointName("Sender")]
+
+public class Startup : FunctionsStartup
+{
+ public override void Configure(IFunctionsHostBuilder builder)
+ {
+ builder.UseNServiceBus(() =>
+ {
+ var configuration = new ServiceBusTriggeredEndpointConfiguration("Sender");
+ var transport = configuration.AdvancedConfiguration.Transport;
+ transport.Routing().RouteToEndpoint(typeof(Ping), "Receiver");
+
+ return configuration;
+ });
+ }
+}
+```
+
+For more information about using NServiceBus with Functions, see [Azure Functions with Azure Service Bus](https://docs.particular.net/nservicebus/hosting/azure-functions/service-bus) in the NServiceBus documentation.
+
+## Next steps
+
+For more information about using NServiceBus with Azure services, see the following articles:
+
+- [Azure Service Bus Send/Reply Sample](https://docs.particular.net/samples/azure-service-bus-netstandard/send-reply/)
+- [Using NServiceBus with Azure Functions](https://docs.particular.net/nservicebus/hosting/azure-functions/service-bus)
+- [Azure Service Bus transport on NServiceBus](https://docs.particular.net/transports/azure-service-bus/)
+- [NServiceBus and Azure](https://docs.particular.net/nservicebus/azure/)
+- [NServiceBus](https://particular.net/nservicebus)
+- [NServiceBus Quick Start Tutorial](https://docs.particular.net/tutorials/quickstart)
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-samples.md
Title: Azure Service Bus messaging samples overview
description: The Service Bus messaging samples demonstrate key features in Azure Service Bus messaging. Provides links to samples on GitHub. Previously updated : 06/18/2021 Last updated : 07/23/2021
static-web-apps Github Actions Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/github-actions-workflow.md
You can take fine-grained control over what commands run during the app or API b
| `app_build_command` | Defines a custom command to build the static content application.<br><br>For example, to configure a production build for an Angular application create an npm script named `build-prod` to run `ng build --prod` and enter `npm run build-prod` as the custom command. If left blank, the workflow tries to run the `npm run build` or `npm run build:azure` commands. | | `api_build_command` | Defines a custom command to build the Azure Functions API application. |
+The following example shows how to define custom build commands inside a job's `with` section.
+
+```yml
+...
+with:
+ azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_MANGO_RIVER_0AFDB141E }}
+ repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)
+ action: 'upload'
+  ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
+ app_location: '/' # App source code path
+ api_location: 'api' # Api source code path - optional
+ output_location: 'dist' # Built app content directory - optional
+ app_build_command: 'npm run build-ui-prod'
+ api_build_command: 'npm run build-api-prod'
+ ###### End of Repository/Build Configurations ######
+```
+ > [!NOTE] > Currently, you can only define custom build commands for Node.js builds. The build process always calls `npm install` before any custom command.
storage Blob Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-storage-monitoring-scenarios.md
+
+ Title: Best practices for monitoring Azure Blob Storage
+description: Learn best practice guidelines and how to apply them when using metrics and logs to monitor your Azure Blob Storage.
+++++ Last updated : 07/30/2021+++
+# Best practices for monitoring Azure Blob Storage
+
+This article features a collection of common storage monitoring scenarios, and provides you with best practice guidelines to accomplish them.
+
+## Identify storage accounts with no or low use
+
+Storage Insights is a dashboard on top of Azure Storage metrics and logs. You can use Storage Insights to examine the transaction volume and used capacity of all your accounts. That information can help you decide which accounts you might want to retire. To configure Storage Insights, see [Monitoring your storage service with Azure Monitor Storage insights](../../azure-monitor/insights/storage-insights-overview.md).
+
+### Analyze transaction volume
+
+From the [Storage Insights view in Azure monitor](../../azure-monitor/insights/storage-insights-overview.md#view-from-azure-monitor), sort your accounts in ascending order by using the **Transactions** column. The following image shows an account with low transaction volume over the specified period.
+
+> [!div class="mx-imgBorder"]
+> ![transaction volume in Storage Insights](./media/blob-storage-monitoring-scenarios/storage-insights-transaction-volume.png)
+
+Click the account link to learn more about these transactions. In this example, most requests are made to the Blob Storage service.
+
+> [!div class="mx-imgBorder"]
+> ![transaction by service type](./media/blob-storage-monitoring-scenarios/storage-insights-transactions-by-storage-type.png)
+
+To determine what sorts of requests are being made, drill into the **Transactions by API name** chart.
+
+> [!div class="mx-imgBorder"]
+> ![Storage transaction APIs](./media/blob-storage-monitoring-scenarios/storage-insights-transaction-apis.png)
+
+In this example, all requests are listing operations or requests for account property information. There are no read and write transactions. This might lead you to believe that the account is not being used in a significant way.
+
+### Analyze used capacity
+
+From the **Capacity** tab of the [Storage Insights view in Azure monitor](../common/storage-insights-overview.md#view-from-azure-monitor), sort your accounts in ascending order by using the **Account used capacity** column. The following image shows an account with lower capacity volume than other accounts.
+
+> [!div class="mx-imgBorder"]
+> ![Used storage capacity](./media/blob-storage-monitoring-scenarios/storage-insights-capacity-used.png)
+
+To examine the blobs associated with this used capacity, you can use Storage Explorer. For large numbers of blobs, consider generating a report by using a [Blob Inventory policy](blob-inventory.md).
+
+## Monitor the use of a container
+
+If you partition your customers' data by container, then you can monitor how much capacity is used by each customer. You can use Azure Storage blob inventory to take an inventory of blobs with size information. Then, you can aggregate the size and count at the container level. For an example, see [Calculate blob count and total size per container using Azure Storage inventory](calculate-blob-count-size.md).
+
+You can also evaluate traffic at the container level by querying logs. To learn more about writing Log Analytic queries, see [Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md). To learn more about the storage logs schema, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md#resource-logs-preview).
+
+Here's a query to get the number of read transactions and the number of bytes read on each container.
++
+```kusto
+StorageBlobLogs
+| where OperationName == "GetBlob"
+| extend ContainerName = split(parse_url(Uri).Path, "/")[1]
+| summarize ReadSize = sum(ResponseBodySize), ReadCount = count() by tostring(ContainerName)
+```
+
+The following query uses a similar approach to obtain information about write operations.
+
+```kusto
+StorageBlobLogs
+| where OperationName == "PutBlob" or
+ OperationName == "PutBlock" or
+ OperationName == "PutBlockList" or
+ OperationName == "AppendBlock" or
+ OperationName == "SnapshotBlob" or
+ OperationName == "CopyBlob" or
+ OperationName == "SetBlobTier"
+| extend ContainerName = split(parse_url(Uri).Path, "/")[1]
+| summarize WriteSize = sum(RequestBodySize), WriteCount = count() by tostring(ContainerName)
+```
+
+The above query references the names of multiple operations because more than one type of operation can count as a write operation. To learn more about which operations are considered read and write operations, see either [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
+
+## Audit account activity
+
+In many cases, you'll need to audit the activities of your storage accounts for security and compliance. Operations on storage accounts fall into two categories: *Control Plane* and *Data Plane*.
+
+A control plane operation is any Azure Resource Manager request to create a storage account or to update a property of an existing storage account. For more information, see [Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+
+A data plane operation is an operation on the data in a storage account that results from a request to the storage service endpoint. For example, a data plane operation is executed when you upload a blob to a storage account or download a blob from a storage account. For more information, see [Azure Storage API](/rest/api/storageservices/).
+
+This section shows you how to identify the "when", "who", "what", and "how" information of control and data plane operations.
+
+### Auditing control plane operations
+
+Resource Manager operations are captured in the [Azure activity log](../../azure-monitor/essentials/activity-log.md). To view the activity log, open your storage account in the Azure portal, and then select **Activity log**.
+
+> [!div class="mx-imgBorder"]
+> ![Activity Log](./media/blob-storage-monitoring-scenarios/activity-log.png)
++
+Open any log entry to view JSON that describes the activity. The following JSON shows the "when", "what" and "how" information of a control plane operation:
+
+> [!div class="mx-imgBorder"]
+> ![Activity Log JSON](./media/blob-storage-monitoring-scenarios/activity-log-json.png)
+
+The availability of the "who" information depends on the method of authentication that was used to perform the control plane operation. If the authorization was performed by an Azure AD security principal, the object identifier of that security principal also appears in this JSON output (for example: `"http://schemas.microsoft.com/identity/claims/objectidentifier": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"`). Because you might not always see other identity-related information, such as an email address or name, the object identifier is the most reliable way to uniquely identify the security principal.
+
+You can find the friendly name of that security principal by taking the value of the object identifier and searching for the security principal in the Azure AD page of the Azure portal. The following screenshot shows a search result in Azure AD.
+
+> [!div class="mx-imgBorder"]
+> ![Search Azure Active Directory](./media/blob-storage-monitoring-scenarios/search-azure-active-directory.png)
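+
+If you also route the activity log to a Log Analytics workspace by using a diagnostic setting on the subscription, you can query control plane operations alongside your resource logs. The following sketch assumes the standard `AzureActivity` table schema; adjust the field names if your workspace differs.
+
+```kusto
+// List recent control plane operations against storage resources,
+// including who performed them and from which IP address.
+AzureActivity
+| where TimeGenerated > ago(7d)
+| where ResourceProviderValue =~ "Microsoft.Storage"
+| project TimeGenerated, OperationNameValue, Caller, CallerIpAddress, ActivityStatusValue
+| order by TimeGenerated desc
+```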
+
+### Auditing data plane operations
+
+Data plane operations are captured in [Azure resource logs for Storage](monitor-blob-storage.md#analyzing-logs). You can [configure a diagnostic setting](monitor-blob-storage.md#send-logs-to-azure-log-analytics) to export logs to a Log Analytics workspace for a native query experience.
+
+Here's a Log Analytics query that retrieves the "when", "who", "what", and "how" information in a list of log entries.
+
+```kusto
+StorageBlobLogs
+| where TimeGenerated > ago(3d)
+| project TimeGenerated, AuthenticationType, RequesterObjectId, OperationName, Uri
+```
+
+For the "when" portion of your audit, the `TimeGenerated` field shows when the log entry was recorded.
+
+For the "what" portion of your audit, the `Uri` field shows the item was modified or read.
+
+For the "how" portion of your audit, the `OperationName` field shows which operation was executed.
+
+For the "who" portion of your audit, `AuthenticationType` shows which type of authentication was used to make a request. This field can show any of the types of authentication that Azure Storage supports including the use of an account key, a SAS token, or Azure Active Directory (Azure AD) authentication.
+
+If a request was authenticated by using Azure AD, the `RequesterObjectId` field provides the most reliable way to identify the security principal. You can find the friendly name of that security principal by taking the value of the `RequesterObjectId` field and searching for the security principal in the Azure AD page of the Azure portal. The following screenshot shows a search result in Azure AD.
+
+> [!div class="mx-imgBorder"]
+> ![Search Azure Active Directory](./media/blob-storage-monitoring-scenarios/search-azure-active-directory.png)
+
+In some cases, a user principal name, or *UPN*, might appear in the logs. For example, if the security principal is an Azure AD user, the UPN will likely appear. For other types of security principals, such as user-assigned managed identities, or in certain scenarios, such as cross-tenant Azure AD authentication, the UPN won't appear in the logs.
+
+This query shows all read operations performed by OAuth security principals.
+
+```kusto
+StorageBlobLogs
+| where TimeGenerated > ago(3d)
+ and OperationName == "GetBlob"
+ and AuthenticationType == "OAuth"
+| project TimeGenerated, AuthenticationType, RequesterObjectId, OperationName, Uri
+```
+
+Shared Key and SAS authentication provide no means of auditing individual identities. Therefore, if you want to improve your ability to audit based on identity, we recommend that you transition to Azure AD and prevent Shared Key and SAS authentication. To learn how to prevent Shared Key and SAS authentication, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=portal). To get started with Azure AD, see [Authorize access to blobs using Azure Active Directory](authorize-access-azure-active-directory.md).
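+
+Before you disable Shared Key and SAS authorization, you might want to locate the requests that still rely on them. The following sketch assumes that `AccountKey` and `SAS` are the values recorded in the `AuthenticationType` field for those requests; verify the exact values that appear in your logs.
+
+```kusto
+// List recent requests that were authorized with an account key or a SAS token.
+StorageBlobLogs
+| where TimeGenerated > ago(3d)
+    and AuthenticationType in ("AccountKey", "SAS")
+| project TimeGenerated, AuthenticationType, OperationName, CallerIpAddress, Uri
+```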
+
+## Optimize cost for infrequent queries
+
+If you maintain large amounts of log data but plan to query it only occasionally (for example, to meet compliance and security obligations), consider archiving your logs to a storage account instead of sending them to Log Analytics. Log Analytics provides rich native query capabilities, but for a massive number of transactions, the cost of using it might be high relative to archiving logs to storage and using other query techniques. See [Azure Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/). You can reduce the cost of querying infrequently accessed data by archiving logs to a storage account and then querying those logs with a serverless query solution such as Azure Synapse.
+
+With Azure Synapse, you can create a serverless SQL pool to query log data when you need it. This approach can significantly reduce costs.
+
+1. Export logs to a storage account. See [Creating a diagnostic setting](monitor-blob-storage.md#creating-a-diagnostic-setting).
+
+2. Create and configure a Synapse workspace. See [Quickstart: Create a Synapse workspace](../../synapse-analytics/quickstart-create-workspace.md).
+
+3. Query the logs. See [Query JSON files using serverless SQL pool in Azure Synapse Analytics](../../synapse-analytics/sql/query-json-files.md).
+
+ Here's an example:
+
+ ```sql
+ select
+ JSON_VALUE(doc, '$.time') AS time,
+ JSON_VALUE(doc, '$.properties.accountName') AS accountName,
+ JSON_VALUE(doc, '$.identity.type') AS identityType,
+ JSON_VALUE(doc, '$.identity.requester.objectId') AS requesterObjectId,
+ JSON_VALUE(doc, '$.operationName') AS operationName,
+ JSON_VALUE(doc, '$.callerIpAddress') AS callerIpAddress,
+    JSON_VALUE(doc, '$.uri') AS uri,
+    doc
+ from openrowset(
+ bulk 'https://demo2uswest4log.blob.core.windows.net/insights-logs-storageread/resourceId=/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/mytestrp/providers/Microsoft.Storage/storageAccounts/demo2uswest/blobServices/default/y=2021/m=03/d=19/h=*/m=*/PT1H.json',
+ format = 'csv', fieldterminator ='0x0b', fieldquote = '0x0b'
+ ) with (doc nvarchar(max)) as rows
+ order by JSON_VALUE(doc, '$.time') desc
+
+ ```
+
+## See also
+
+- [Monitoring Azure Blob Storage](monitor-blob-storage.md).
+- [Tutorial: Use Kusto queries in Azure Data Explorer and Azure Monitor](/azure/data-explorer/kusto/query/tutorial?pivots=azuredataexplorer).
+- [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
+
+
+
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/monitor-blob-storage.md
To collect resource logs, you must create a diagnostic setting. When you create
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, or an Azure Resource Manager template.
+You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
Here's an example:
To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+### [Azure Policy](#tab/policy)
+
+You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+ ## Analyzing metrics
You can read the metric values of your storage account or the Blob storage servi
N/A.
+### [Azure Policy](#tab/policy)
+
+N/A.
+ ## Analyzing logs
No. Azure Compute supports the metrics on disks. For more information, see [Per
- For a reference of the logs and metrics created by Azure Blob Storage, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md). - For details on monitoring Azure resources, see [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md). - For more information on metrics migration, see [Azure Storage metrics migration](../common/storage-metrics-migration.md).
+- For common scenarios and best practices, see [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md).
storage Quickstart Blobs Javascript Browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/quickstart-blobs-javascript-browser.md
The shared access signature (SAS) is used by code running in the browser to auth
Follow these steps to get the Blob service SAS URL: 1. In the Azure portal, select your storage account.
-2. Navigate to the **Settings** section and select **Shared access signature**.
+2. Navigate to the **Security + networking** section and select **Shared access signature**.
3. Scroll down and click the **Generate SAS and connection string** button. 4. Scroll down further and locate the **Blob service SAS URL** field 5. Click the **Copy to clipboard** button at the far-right end of the **Blob service SAS URL** field.
For tutorials, samples, quickstarts, and other documentation, visit:
> [Azure for JavaScript documentation](/azure/developer/javascript/) * To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
-* To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
+* To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azurite.md
Title: Use Azurite emulator for local Azure Storage development description: The Azurite open-source emulator provides a free local environment for testing your Azure storage applications.-+ - Previously updated : 07/19/2021+ Last updated : 08/02/2021
The Azurite open-source emulator provides a free local environment for testing y
Azurite is the future storage emulator platform. Azurite supersedes the [Azure Storage Emulator](storage-use-emulator.md). Azurite will continue to be updated to support the latest versions of Azure Storage APIs.
-There are several different ways to install and run Azurite on your local system:
+There are several different ways to install and run Azurite on your local system. Select any of these tabs.
- 1. [Install and run the Azurite Visual Studio Code extension](#install-and-run-the-azurite-visual-studio-code-extension)
- 1. [Install and run Azurite by using NPM](#install-and-run-azurite-by-using-npm)
- 1. [Install and run the Azurite Docker image](#install-and-run-the-azurite-docker-image)
- 1. [Clone, build, and run Azurite from the GitHub repository](#clone-build-and-run-azurite-from-the-github-repository)
+## Install and run Azurite
-## Install and run the Azurite Visual Studio Code extension
+### [Visual Studio](#tab/visual-studio)
+
+In Visual Studio, create an Azure project such as an **Azure Functions** project.
+
+![New Azure Function project](media/storage-use-azurite/visual-studio-azure-function-project.png)
+
+Assuming that you create an **Azure Functions** project, make sure to select **Http trigger**. Then, in the **Authorization level** dropdown list, select **Anonymous**.
+
+![Function project settings](media/storage-use-azurite/visual-studio-azure-function-project-settings.png)
+
+Install [Node.js version 8.0 or later](https://nodejs.org). Node Package Manager (npm) is the package management tool included with every Node.js installation. After installing Node.js, execute the following `npm` command to install Azurite.
+
+```console
+npm install -g azurite
+```
+
+From the command line, start Azurite by using the following command:
+
+```console
+azurite
+```
+
+Output information similar to the following appears in the console.
+
+![Command line output](media/storage-use-azurite/azurite-command-line-output.png)
+
+Change to the [release build configuration](/visualstudio/debugger/how-to-set-debug-and-release-configurations#change-the-build-configuration), and then run the project.
+
+>[!NOTE]
+> If you start the project by using the debug build configuration, you might receive an error. That's because Visual Studio might try to start the legacy storage emulator that is built into Visual Studio. Any attempt to start the legacy emulator will be blocked because Azurite is using the listening ports that are required by the legacy storage emulator.
+
+The following image shows the command line output that appears when you run an Azure Function project.
+
+![Command line output after running project](media/storage-use-azurite/azurite-command-line-output-2.png)
+
+### [Visual Studio Code](#tab/visual-studio-code)
Within Visual Studio Code, select the **EXTENSIONS** pane and search for *Azurite* in the **EXTENSIONS:MARKETPLACE**.
The following settings are supported:
- **Azurite: Table Host** - The Table service listening endpoint, by default setting is 127.0.0.1. - **Azurite: Table Port** - The Table service listening port, by default 10002.
-## Install and run Azurite by using NPM
+### [npm](#tab/npm)
This installation method requires that you have [Node.js version 8.0 or later](https://nodejs.org) installed. Node Package Manager (npm) is the package management tool included with every Node.js installation. After installing Node.js, execute the following `npm` command to install Azurite.
npm install -g azurite
After installing Azurite, see [Run Azurite from a command line](#run-azurite-from-a-command-line).
-## Install and run the Azurite Docker image
+### [Docker Hub](#tab/docker-hub)
Use [DockerHub](https://hub.docker.com/) to pull the [latest Azurite image](https://hub.docker.com/_/microsoft-azure-storage-azurite) by using the following command:
docker run -p 10000:10000 mcr.microsoft.com/azure-storage/azurite \
For more information about configuring Azurite at start-up, see [Command-line options](#command-line-options).
-## Clone, build, and run Azurite from the GitHub repository
+### [GitHub](#tab/github)
This installation method requires that you have [Git](https://git-scm.com/) installed. Clone the [GitHub repository](https://github.com/azure/azurite) for the Azurite project by using the following console command.
npm install -g
After installing and building Azurite, see [Run Azurite from a command line](#run-azurite-from-a-command-line). ++ ## Run Azurite from a command line > [!NOTE]
-> Azurite cannot be run from the command line if you only installed the Visual Studio Code extension. Instead, use the Visual Studio Code command palette. For more information, see [Install and run the Azurite Visual Studio Code extension](#install-and-run-the-azurite-visual-studio-code-extension).
+> Azurite cannot be run from the command line if you only installed the Visual Studio Code extension. Instead, use the Visual Studio Code command palette.
To get started immediately with the command line, create a directory called *c:\azurite*, then launch Azurite by issuing the following command:
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-monitoring.md
To get the list of SMB and REST operations that are logged, see [Storage logged
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, or an Azure Resource Manager template.
+You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
> [!NOTE] > Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues,and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. See [Storage account overview](../common/storage-account-overview.md).
Here's an example:
To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+### [Azure Policy](#tab/policy)
+
+You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+ ## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
N/A.
+### [Azure Policy](#tab/policy)
+
+N/A.
+ ## Analyzing logs
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/queues/monitor-queue-storage.md
To collect resource logs, you must create a diagnostic setting. When you create
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, or an Azure Resource Manager template.
+You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
Here's an example:
To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+### [Azure Policy](#tab/policy)
+
+You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+ ## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
N/A.
+### [Azure Policy](#tab/policy)
+
+N/A.
+ ## Analyzing logs
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/monitor-table-storage.md
To collect resource logs, you must create a diagnostic setting. When you create
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, or an Azure Resource Manager template.
+You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
Here's an example:
To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+### [Azure Policy](#tab/policy)
+
+You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+ ## Analyzing metrics
The following example shows how to read metric data on the metric supporting mul
N/A.
+### [Azure Policy](#tab/policy)
+
+N/A.
+ ## Analyzing logs
synapse-analytics Linked Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/linked-service.md
Title: Secure a linked service description: Learn how to provision and secure a linked service with Managed VNet -+ Last updated 04/15/2020-+
synapse-analytics Connect To A Secure Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/connect-to-a-secure-storage-account.md
Title: Connect to a secure storage account from your Azure Synapse workspace description: This article will teach you how to connect to a secure storage account from your Azure Synapse workspace-+ Last updated 02/10/2021 -+
synapse-analytics Connectivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/connectivity-settings.md
Title: Azure Synapse connectivity settings description: An article that teaches you to configure connectivity settings in Azure Synapse Analytics -+ Last updated 03/15/2021 -+
synapse-analytics How To Connect To Workspace With Private Links https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md
Title: Connect to a Synapse workspace using private links description: This article will teach you how to connect to your Azure Synapse workspace using private links-+ Last updated 04/15/2020 -+
synapse-analytics How To Create Managed Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-create-managed-private-endpoints.md
Title: Create a Managed private endpoint to connect to your data source results description: This article will teach you how to create a Managed private endpoint to your data sources from an Azure Synapse workspace. -+ Last updated 04/15/2020 -+
synapse-analytics How To Grant Workspace Managed Identity Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions.md
Title: Grant permissions to managed identity in Synapse workspace description: An article that explains how to configure permissions for managed identity in Azure Synapse workspace. -+ Last updated 04/15/2020 -+
synapse-analytics How To Manage Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md
Title: How to manage Synapse RBAC assignments in Synapse Studio description: This article describes how to assign and revoke Synapse RBAC roles to AAD security principals-+ Last updated 12/1/2020-+
synapse-analytics How To Review Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-review-synapse-rbac-role-assignments.md
Title: How to review Synapse RBAC role assignments in Synapse Studio description: This article describes how to review Synapse RBAC role assignments using Synapse Studio-+ Last updated 12/1/2020-+
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-set-up-access-control.md
Title: How to set up access control for your Synapse workspace description: This article will teach you how to control access to a Synapse workspace using Azure roles, Synapse roles, SQL permissions, and Git permissions.- -+ Last updated 12/03/2020 -+
synapse-analytics Synapse Workspace Access Control Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
Title: Azure Synapse workspace access control overview description: This article describes the mechanisms used to control access to a Synapse workspace and the resources and code artifacts it contains. -+ Last updated 12/03/2020 -+ # Azure Synapse access control
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
Title: Configure IP firewall rules description: An article that teaches you to configure IP firewall rules in Azure Synapse Analytics -+ Last updated 04/15/2020 -+
synapse-analytics Synapse Workspace Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-managed-identity.md
Title: Managed identity in Synapse workspace description: An article that explains managed identity in Azure Synapse workspace-+ Last updated 10/16/2020 -+
synapse-analytics Synapse Workspace Managed Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-managed-private-endpoints.md
Title: Managed private endpoints description: An article that explains Managed private endpoints in Azure Synapse Analytics-+ Last updated 01/12/2020-+
synapse-analytics Synapse Workspace Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-managed-vnet.md
Title: Managed virtual network description: An article that explains Managed virtual network in Azure Synapse Analytics-+ Last updated 01/18/2021-+
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
Title: Synapse RBAC roles description: This article describes the built-in Synapse RBAC roles-+ Last updated 12/1/2020-+
synapse-analytics Synapse Workspace Synapse Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-synapse-rbac.md
Title: Synapse role-based access control description: An article that explains role-based access control in Azure Synapse Analytics-+ Last updated 12/1/2020-+ # What is Synapse role-based access control (RBAC)?
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Title: Understand the roles required to perform common tasks in Synapse description: This article describes which built-in Synapse RBAC role(s) are required to accomplish specific tasks-+ Last updated 12/1/2020-+ # Understand the roles required to perform common tasks in Synapse
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-data-types.md
Minimizing the size of data types shortens the row length, which leads to better
- Avoid defining character columns with a large default length. For example, if the longest value is 25 characters, then define your column as VARCHAR(25). - Avoid using [NVARCHAR][NVARCHAR] when you only need VARCHAR. - When possible, use NVARCHAR(4000) or VARCHAR(8000) instead of NVARCHAR(MAX) or VARCHAR(MAX).
+- Avoid using float and decimal data types with 0 (zero) scale. Use TINYINT, SMALLINT, INT, or BIGINT instead.
> [!NOTE] > If you are using PolyBase external tables to load your Synapse SQL tables, the defined length of the table row cannot exceed 1 MB. When a row with variable-length data exceeds 1 MB, you can load the row with BCP, but not with PolyBase.
The following list shows the data types that Synapse SQL does not support and gi
## Next steps
-For more information on developing tables, see [Table Overview](develop-overview.md).
+For more information on developing tables, see [Table Overview](develop-overview.md).
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 04/09/2021 Last updated : 08/02/2021
To enable media optimization for Teams, set the following registry key on the ho
### Install the Teams WebSocket Service
-Install the latest [Remote Desktop WebRTC Redirector Service](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4AQBt) on your VM image. If you encounter an installation error, install the [latest Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) and try again.
+Install the latest [Remote Desktop WebRTC Redirector Service](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWFYsj) on your VM image. If you encounter an installation error, install the [latest Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) and try again.
#### Latest WebSocket Service versions
The following table lists the latest versions of the WebSocket Service:
|Version |Release date | ||--|
+|1.0.2106.14001 |07/29/2021 |
|1.0.2006.11001 |07/28/2020 | |0.11.0 |05/29/2020 |
+#### Updates for version 1.0.2106.14001
+
+Increased the connection reliability between the WebRTC redirector service and the WebRTC client plugin.
+ #### Updates for version 1.0.2006.11001 - Fixed an issue where minimizing the Teams app during a call or meeting caused incoming video to drop.
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/iaas-antimalware-windows.md
vm-windows Previously updated : 02/23/2021 Last updated : 07/30/2021
The following example assumes the VM extension is nested inside the virtual mach
"settings": { "AntimalwareEnabled": "true", "Exclusions": {
- "Extensions": ".log;.ldf",
- "Paths": "D:\\IISlogs;D:\\DatabaseLogs",
- "Processes": "mssence.svc"
+ "Extensions": ".ext1;.ext2",
+ "Paths": "c:\\excluded-path-1;c:\\excluded-path-2",
+ "Processes": "excludedproc1.exe;excludedproc2.exe"
}, "RealtimeProtectionEnabled": "true",
The following example assumes the VM extension is nested inside the virtual mach
} } ```
+You must include, at a minimum, the following content to enable the Microsoft Antimalware extension:
+
+`{ "AntimalwareEnabled": true }`
+
+Microsoft Antimalware JSON configuration sample:
+
+```json
+{ "AntimalwareEnabled": true, "RealtimeProtectionEnabled": true, "ScheduledScanSettings": { "isEnabled": true, "day": 1, "time": 120, "scanType": "Full" },
+
+"Exclusions": { "Extensions": ".ext1;.ext2", "Paths": "c:\excluded-path-1;c:\excluded-path-2", "Processes": "excludedproc1.exe;excludedproc2.exe" }
+}
+```
++
+AntimalwareEnabled
+
+- required parameter
+- Values: true/false
+
+ - true = Enable
+ - false = Error out, as false is not a supported value
+
+RealtimeProtectionEnabled
+
+- Values: true/false, default is true
+
+ - true = Enable
+ - false = Disable
+
+ScheduledScanSettings
+
+- isEnabled = true/false
+- day = 0-8 (0-daily, 1-Sunday, 2-Monday, ...., 7-Saturday, 8-Disabled)
+- time = 0-1440 (measured in minutes after midnight - 60->1AM, 120 -> 2AM, ... )
+- scanType = Quick/Full, default is Quick
+
+- If isEnabled = true is the only setting provided, the following defaults are set: day=7 (Saturday), time=120 (2 AM), scanType="Quick"
+
+Exclusions
+
+- Multiple exclusions in the same list are specified by using semicolon delimiters
+- If no exclusions are specified, any existing exclusions on the system are overwritten with a blank list
## PowerShell deployment Depends on your type of deployment, use the corresponding commands to deploy the Azure Antimalware virtual machine extension to an existing virtual machine.
virtual-machines Sap Iq Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-iq-deployment-guide.md
+
+ Title: Implement SAP NLS/SDA with SAP IQ on Azure | Microsoft Docs
+description: Plan, deploy, and configure SAP NLS/SDA solution with SAP IQ on Azure.
+
+documentationcenter: saponazure
++
+editor: ''
+tags: azure-resource-manager
+keywords: ''
++
+ vm-windows
+ Last updated : 06/11/2021+++
+# SAP BW-Near Line Storage (NLS) implementation guide with SAP IQ on Azure
+
+## Overview
+
+Over the years, customers running an SAP BW system see exponential growth in database size, which results in increased compute costs. To achieve the right balance of cost and performance, customers can use near-line storage (NLS) to migrate historical data. The NLS implementation based on SAP IQ is the standard method from SAP to move historical data out of the primary database (SAP HANA or anyDB). The adapter for SAP IQ as a near-line solution is delivered with the SAP BW system. Integrating SAP IQ makes it possible to separate frequently accessed data from infrequently accessed data, which reduces the resource demand on the SAP BW system.
+
+This guide provides guidelines for planning, deploying, and configuring SAP BW near-line storage (NLS) with SAP IQ on Azure. It covers the common Azure services and features that are relevant for SAP IQ NLS deployment and doesn't cover any NLS partner solutions. This guide isn't intended to replace SAP's standard documentation on NLS deployment with SAP IQ; instead, it complements the official installation and administration documentation.
+
+## Solution overview
+
+In an operational BW system, the volume of data increases constantly, and this data must be kept for long periods because of business and legal requirements. A large volume of data can affect the performance of the system and increase the administration effort, which results in the need to implement a data-aging strategy. If you want to keep the amount of data in your SAP BW system without deleting it, you can use data archiving. The data is first moved to archive or near-line storage and then deleted from the BW system. Depending on how the data has been archived, you can either access it directly or load it back as required.
+
+SAP BW users can use SAP IQ as a near-line storage (NLS) solution. The adapter for SAP IQ as a near-line solution is delivered with the BW system. With NLS implemented, frequently used data is stored in the SAP BW online database (SAP HANA or anyDB), while infrequently accessed data is stored in SAP IQ, which reduces the cost to manage data and improves the performance of the SAP BW system. To ensure consistency between online data and near-line data, the archived partitions are locked and are read-only.
+
+SAP IQ supports two types of architecture: simplex and multiplex. In a simplex architecture, a single instance of the SAP IQ server runs on a single virtual machine, and files might be located on the host machine or on a network storage device.
+
+> [!Important]
+> For the SAP NLS solution, only the simplex architecture is available and evaluated by SAP.
+
+![SAP IQ solution overview](media/sap-iq-deployment-guide/sap-iq-solution-overview.png)
+
+In Azure, the SAP IQ server must be implemented on a separate virtual machine (VM). We don't recommend installing the SAP IQ software on an existing server that already has another database instance running, because SAP IQ uses all available CPU and memory for its own usage. One SAP IQ server can be used for multiple SAP NLS implementations.
+
+## Support matrix
+
+This section provides an overview of the support matrix for the SAP IQ NLS solution, which is covered in more detail in this document. Also, check the [product availability matrix (PAM)](https://userapps.support.sap.com/sap/support/pam) for up-to-date information based on your SAP IQ release.
+
+- **Operating system**: SAP IQ is certified at the operating-system level only. You can run SAP IQ-certified operating systems in the Azure environment as long as they're compatible with the Azure infrastructure. For more information, see SAP Note [2133194](https://launchpad.support.sap.com/#/notes/2133194).
+
+- **SAP BW compatibility**: Near-line storage for SAP IQ is released only for SAP BW systems that already run under Unicode. Follow SAP Note [1796393](https://launchpad.support.sap.com/#/notes/1796393), which contains information about SAP BW.
+
+- **Storage**: In Azure, SAP IQ supports premium managed disks (Windows/Linux), Azure shared disks (Windows only), and Azure NetApp Files (NFS - Linux only).
+
+## Sizing
+
+Sizing for SAP IQ is based on CPU, memory, and storage. The general sizing guidelines for SAP IQ on Azure can be found in SAP Note [1951789](https://launchpad.support.sap.com/#/notes/1951789). The sizing recommendation you get by following the guidelines needs to be mapped to certified Azure virtual machine types for SAP. SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533) provides the list of supported SAP products and Azure VM types.
+
+> [!Tip]
+>
+> For a production system, we recommend E-Series virtual machines because of their core-to-memory ratio.
+
+The SAP IQ sizing guide and sizing worksheet mentioned in SAP Note [1951789](https://launchpad.support.sap.com/#/notes/1951789) were developed for native usage of the SAP IQ database. Because they don't reflect the resources for planning the <SID>IQ database, you might end up with unused resources for SAP NLS.
+
+## Azure resources
+
+### Choosing regions
+
+If you're already running your SAP systems on Azure, you've probably identified your region. The SAP IQ deployment must be in the same region as the SAP BW system for which you're implementing the NLS solution. To decide on the architecture for SAP IQ, you need to verify that the services it requires, such as Azure NetApp Files (NFS - Linux only), are available in that region. To check service availability in your region, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) page.
+
+### Availability sets
+
+To achieve redundancy for SAP systems in the Azure infrastructure, the application needs to be deployed in either availability sets or availability zones. Technically, SAP IQ high availability can be achieved by using the IQ multiplex architecture, but the multiplex architecture doesn't meet the requirements of the NLS solution. To achieve high availability for the SAP IQ simplex architecture, you need to configure a two-node cluster with a custom solution. The two-node SAP IQ cluster can be deployed in availability sets or availability zones, but the Azure storage attached to the nodes determines the deployment method. Currently, Azure shared premium disks and Azure NetApp Files don't support zonal deployment, which leaves only the availability set option for SAP IQ deployment.
+
+### Virtual machines
+
+Based on SAP IQ sizing, you need to map your requirements to Azure virtual machine types that are supported in Azure for SAP products. SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533) is a good starting point; it lists the supported Azure VM types for SAP products on Windows and Linux. Keep in mind that beyond selecting supported VM types, you also need to check whether those VM types are available in your specific region. You can check the availability of VM types on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) page. For choosing a pricing model, see [Azure virtual machines for SAP workload](planning-guide.md#azure-virtual-machines-for-sap-workload).
+
+For a production system, we recommend E-Series virtual machines because of their core-to-memory ratio.
+
+### Storage
+
+Azure Storage has several storage types available for customers; you can read the details in [What disk types are available in Azure?](../../disks-types.md). Some storage types have limited use for SAP scenarios, but several Azure Storage types are well suited or optimized for specific SAP workload scenarios. For more information, see the [Azure Storage types for SAP workload](planning-guide-storage.md) guide, which highlights the storage options that are suited for SAP. For SAP IQ on Azure, the following Azure storage types can be used, based on your operating system (Windows or Linux) and deployment method (standalone or highly available).
+
+- Azure-managed disks
+
+  An Azure-managed disk is a block-level storage volume that's managed by Azure. You can use Azure-managed disks for an SAP IQ simplex deployment. Different types of [Azure managed disks](../../managed-disks-overview.md) are available, but we recommend using [Premium SSDs](../../disks-types.md#premium-ssd) for SAP IQ.
+
+- Azure shared disks
+
+  [Azure shared disks](../../disks-shared.md) is a feature of Azure managed disks that allows you to attach a managed disk to multiple virtual machines (VMs) simultaneously. Shared managed disks don't natively offer a fully managed file system that can be accessed by using SMB/NFS. You need to use a cluster manager, like [Windows Server Failover Clustering](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/failover-clustering/failover-clustering-overview.md) (WSFC), that handles cluster node communication and write locking. To deploy a highly available solution for the SAP IQ simplex architecture on Windows, you can use an Azure shared disk between two nodes that are managed by WSFC. The SAP IQ deployment architecture with an Azure shared disk is discussed in the article [Deploy SAP IQ NLS HA Solution using Azure shared disk on Windows Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-shared-disk-on-windows/ba-p/2433089).
+
+- Azure NetApp Files
+
+  An SAP IQ deployment on Linux can use [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) as the file system (NFS protocol) for either a standalone or a highly available installation. Because this storage offering isn't available in all regions, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) page for up-to-date information. The SAP IQ deployment architecture with Azure NetApp Files is discussed in the article [Deploy SAP IQ-NLS HA Solution using Azure NetApp Files on SUSE Linux Enterprise Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-netapp-files-on-suse/ba-p/1651172).
+
+The following table lists the recommended storage types for each operating system.
+
+| Storage type | Windows | Linux |
+| - | - | -- |
+| Azure-managed disks | Yes | Yes |
+| Azure shared disks | Yes | No |
+| Azure NetApp Files | No | Yes |
+
+### Networking
+
+Azure provides a network infrastructure that allows the mapping of all scenarios that can be realized for an SAP BW system that uses SAP IQ as near-line storage, such as connecting to an on-premises system, connecting to systems in a different virtual network, and more. For more information, see [Microsoft Azure networking for SAP workload](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/workloads/sap/planning-guide.md#microsoft-azure-networking).
+
+## Deploy SAP IQ on Windows
+
+### Server preparation and installation
+
+Follow the latest guide by SAP to prepare servers for the NLS implementation with SAP IQ on Windows. For the most up-to-date information, refer to the first guidance document published by SAP, which you can find in SAP Note [2780668 - SAP First Guidance - BW NLS Implementation with SAP IQ](https://launchpad.support.sap.com/#/notes/0002780668). It covers comprehensive information related to the prerequisites for SAP BW systems, the IQ file system layout, installation, post-installation configuration, and BW NLS integration with IQ.
+
+### High-availability deployment
+
+SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution, only the simplex server architecture is available and evaluated. Simplex is a single instance of an SAP IQ server running on a single virtual machine. Technically, SAP IQ high availability can be achieved by using the multiplex server architecture, but the multiplex architecture doesn't meet the requirements of the NLS solution. For the simplex server architecture, SAP doesn't provide any features or procedures to run SAP IQ in a high-availability configuration.
+
+To set up SAP IQ high availability on Windows for the simplex server architecture, you need to set up a custom solution, which requires extra configuration, such as a Windows Server failover cluster and a shared disk. One such custom solution for SAP IQ on Windows is described in detail in the blog post [Deploy SAP IQ NLS HA Solution using Azure shared disk on Windows Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-shared-disk-on-windows/ba-p/2433089).
+
+### Back up and restore
+
+In Azure, you can schedule an SAP IQ database backup as described by SAP in [IQ Administration: Backup, Restore, and Data Recovery](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/5b8309b37f4e46b089465e380c24df59.html). SAP IQ provides different types of database backups; details about each backup type can be found in [Backup Scenarios](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880dc1f84f21015af84f1a6b629dd7a.html).
+
+- Full backup - It makes a complete copy of the database.
+- Incremental backup - It copies all transactions since the last backup of any type.
+- Incremental since full backup - It backs up all changes to the database since the last full backup.
+- Virtual backup - It copies all of the database except the table data and metadata from the IQ store.
+
+Depending on your IQ database size, you can schedule your database backup by using any of the backup scenarios. If you're using SAP IQ with the NLS interface delivered by SAP and want to automate the backup process for the IQ database in a way that ensures the SAP IQ database can always be recovered to a consistent state without data loss with respect to the data movement processes between the primary database and the SAP IQ database, see SAP Note [2741824 - How to setup backup automation for SAP IQ Cold Store/Near-line Storage](https://launchpad.support.sap.com/#/notes/2741824), which provides details on setting up automation for SAP IQ near-line storage.
+
+For a large IQ database, you can use virtual backup in SAP IQ. For more information on virtual backup, see [Virtual Backups](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880672184f21015a08dceedc7d19776.html), [Introduction Virtual Backup in SAP Sybase IQ](https://wiki.scn.sap.com/wiki/display/SYBIQ/Introduction+Virtual+BackUp+(+general++back+up+method+)+in+SAP+Sybase+IQ), and SAP Note [2461985 - How to Backup Large SAP IQ Database](https://launchpad.support.sap.com/#/notes/0002461985).
+
+If you're using a network drive (SMB protocol) to back up and restore the SAP IQ server on Windows, make sure to use the UNC path for the backup. Three backslashes (`\\\`) are required when you use a UNC path for backup and restore.
+
+```sql
+BACKUP DATABASE FULL TO '\\\sapiq.internal.contoso.net\sapiq-backup\backup\data\<filename>'
+```
+
+## Deploy SAP IQ on Linux
+
+### Server preparation and installation
+
+Follow the latest guide by SAP to prepare servers for the NLS implementation with SAP IQ on Linux. For the most up-to-date information, refer to the first guidance document published by SAP, which you can find in SAP Note [2780668 - SAP First Guidance - BW NLS Implementation with SAP IQ](https://launchpad.support.sap.com/#/notes/0002780668). It covers comprehensive information related to the prerequisites for SAP BW systems, the IQ file system layout, installation, post-installation configuration, and BW NLS integration with IQ.
+
+### High-availability deployment
+
+SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution, only the simplex server architecture is available and evaluated. Simplex is a single instance of an SAP IQ server running on a single virtual machine. Technically, SAP IQ high availability can be achieved by using the multiplex server architecture, but the multiplex architecture doesn't meet the requirements of the NLS solution. For the simplex server architecture, SAP doesn't provide any features or procedures to run SAP IQ in a high-availability configuration.
+
+To set up SAP IQ high availability on Linux for the simplex server architecture, you need to set up a custom solution, which requires extra configuration, such as Pacemaker. One such custom solution for SAP IQ on Linux is described in detail in the blog post [Deploy SAP IQ-NLS HA Solution using Azure NetApp Files on SUSE Linux Enterprise Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-netapp-files-on-suse/ba-p/1651172).
+
+### Back up and restore
+
+In Azure, you can schedule an SAP IQ database backup as described by SAP in [IQ Administration: Backup, Restore, and Data Recovery](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/5b8309b37f4e46b089465e380c24df59.html). SAP IQ provides different types of database backups; details about each backup type can be found in [Backup Scenarios](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880dc1f84f21015af84f1a6b629dd7a.html).
+
+- Full backup - It makes a complete copy of the database.
+- Incremental backup - It copies all transactions since the last backup of any type.
+- Incremental since full backup - It backs up all changes to the database since the last full backup.
+- Virtual backup - It copies all of the database except the table data and metadata from the IQ store.
+
+Depending on your IQ database size, you can schedule the database backup. If you're using SAP IQ with the NLS interface delivered by SAP and want to automate the backup process for the IQ database in a way that ensures the SAP IQ database can always be recovered to a consistent state without data loss with respect to the data movement processes between the primary database and the SAP IQ database, see SAP Note [2741824 - How to setup backup automation for SAP IQ Cold Store/Near-line Storage](https://launchpad.support.sap.com/#/notes/2741824), which provides details on setting up automation for SAP IQ near-line storage.
+
+For a large IQ database, you can use virtual backup in SAP IQ. For more information on virtual backup, see [Virtual Backups](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880672184f21015a08dceedc7d19776.html), [Introduction Virtual Backup in SAP Sybase IQ](https://wiki.scn.sap.com/wiki/display/SYBIQ/Introduction+Virtual+BackUp+(+general++back+up+method+)+in+SAP+Sybase+IQ), and SAP Note [2461985 - How to Backup Large SAP IQ Database](https://launchpad.support.sap.com/#/notes/0002461985).
+
+## Disaster recovery
+
+This section explains the strategy for providing disaster recovery (DR) protection for the SAP IQ NLS solution. It complements the [Disaster recovery for SAP](../../../site-recovery/site-recovery-sap.md) article, which represents the primary resource for an overall SAP DR approach. The process described in this document is presented at an abstract level; you need to validate the exact steps and thoroughly test your DR strategy.
+
+For SAP IQ, see SAP Note [2566083](https://launchpad.support.sap.com/#/notes/0002566083), which describes methods to implement a DR environment safely. In Azure, you can also use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) for your SAP IQ DR strategy. The strategy for SAP IQ DR depends on the way SAP IQ is deployed in Azure, and it should also be in line with the DR strategy for your SAP BW system.
+
+- Standalone deployment of SAP IQ
+
+  You might have installed SAP IQ as a standalone system that doesn't have any application-level redundancy or high availability, but the business requires a DR setup. On a standalone IQ system, all the disks (Azure-managed disks) attached to the virtual machine are local. [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) can be used to replicate a standalone SAP IQ virtual machine to the secondary region. It replicates the server and all the attached managed disks to the secondary region so that if a disaster or an outage occurs, you can easily fail over to your replicated environment and continue working. To start replicating the SAP IQ VMs to the Azure DR region, follow the guidance in [Replicate a virtual machine to Azure](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md).
+
+- Highly available deployment of SAP IQ
+
+  You might have installed SAP IQ as a highly available system where the IQ binaries and database files are on an Azure shared disk (Windows only) or on a network drive like Azure NetApp Files (Linux only). In such a setup, you need to identify whether you need the same highly available SAP IQ on the DR site, or whether a standalone SAP IQ will meet your business requirements. If you need a standalone SAP IQ on the DR site, you can use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) to replicate the primary SAP IQ virtual machine to the secondary region. It replicates the server and all the locally attached managed disks to the secondary region, but it won't replicate an Azure shared disk or a network drive like Azure NetApp Files. To copy data from the Azure shared disk or network drive, you can use any file-based copy tool to replicate data between Azure regions. For more information on how to copy an Azure NetApp Files volume to another region, see [FAQs about Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-faqs.md#how-do-i-create-a-copy-of-an-azure-netapp-files-volume-in-another-azure-region).
+
+## Next steps
+
+- [Set up disaster recovery for a multi-tier SAP app deployment](../../../site-recovery/site-recovery-sap.md)
+- [Azure Virtual Machines planning and implementation for SAP](planning-guide.md)
+- [Azure Virtual Machines deployment for SAP](deployment-guide.md)
virtual-network Virtual Networks Name Resolution Ddns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-name-resolution-ddns.md
Title: Using dynamic DNS to register hostnames in Azure | Microsoft Docs
description: Learn how to setup dynamic DNS to register hostnames in your own DNS servers. documentationcenter: na-+ editor: ''
na Last updated 02/23/2017-+ # Use dynamic DNS to register hostnames in your own DNS server
virtual-wan Manage Secure Access Resources Spoke P2s https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/manage-secure-access-resources-spoke-p2s.md
In this section, you generate and download the configuration profile files. Thes
## <a name="clients"></a>Configure VPN clients
-Use the downloaded profile to configure the remote access clients. The procedure for each operating system is different, follow the instructions that apply to your system. The following instructions are for Windows VPN clients.
+Use the downloaded profile to configure the remote access clients. The procedure for each operating system is different, follow the instructions that apply to your system.
[!INCLUDE [Configure clients](../../includes/virtual-wan-p2s-configure-clients-include.md)]
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN supports up to 20 Gbps aggregate throughput both for VPN and Express
A virtual network gateway VPN is limited to 30 tunnels. For connections, you should use Virtual WAN for large-scale VPN. You can connect up to 1,000 branch connections per region (virtual hub) with aggregate of 20 Gbps per hub. A connection is an active-active tunnel from the on-premises VPN device to the virtual hub. You can have one hub per region, which means you can connect more than 1,000 branches across hubs.
+### What is the recommended packets per second (PPS) limit per IPsec tunnel?
+
+For optimal performance, we recommend sending around 95,000 packets per second (PPS) with the GCMAES256 algorithm for both IPsec encryption and integrity. Traffic isn't blocked if more than 95,000 PPS is sent, but performance degradation such as latency and packet drops can be expected. Create additional tunnels if greater PPS is required.
++ ### What is a Virtual WAN gateway scale unit? A scale unit is a unit defined to pick an aggregate throughput of a gateway in Virtual hub. 1 scale unit of VPN = 500 Mbps. 1 scale unit of ExpressRoute = 2 Gbps. Example: 10 scale unit of VPN would imply 500 Mbps * 10 = 5 Gbps
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-point-to-site-portal.md
Previously updated : 07/29/2021 Last updated : 08/02/2021 # Tutorial: Create a User VPN connection using Azure Virtual WAN
-This tutorial shows you how to use Virtual WAN to connect to your resources in Azure over an IPsec/IKE (IKEv2) or OpenVPN VPN connection. This type of connection requires the VPN client to be configured on the client computer. For more information about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md).
+This tutorial shows you how to use Virtual WAN to connect to your resources in Azure over an OpenVPN or IPsec/IKE (IKEv2) VPN connection using a User VPN (P2S) configuration. This type of connection requires the native VPN client to be configured on each connecting client computer. For more information about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md).
In this tutorial, you learn how to: > [!div class="checklist"] > * Create a virtual WAN
-> * Create a P2S configuration
-> * Create a virtual hub
-> * Generate VPN client profile configuration package
+> * Create the User VPN configuration
+> * Create the virtual hub and gateway
+> * Generate client configuration files
> * Configure VPN clients
+> * Connect to a VNet
> * View your virtual WAN > * Modify settings
-![Virtual WAN diagram](./media/virtual-wan-about/virtualwanp2s.png)
## Prerequisites [!INCLUDE [Before beginning](../../includes/virtual-wan-before-include.md)]
-## <a name="wan"></a>Create a virtual WAN
+## <a name="wan"></a>Create virtual WAN
[!INCLUDE [Create a virtual WAN](../../includes/virtual-wan-create-vwan-include.md)]
-## <a name="p2sconfig"></a>Create a P2S configuration
+## <a name="p2sconfig"></a>Create User VPN configuration
-A point-to-site (P2S) configuration defines the parameters for connecting remote clients.
+The User VPN (P2S) configuration defines the parameters that remote clients use to connect. The instructions you follow depend on the authentication method you want to use.
+
+In the following steps, when selecting the authentication method, you have three choices. Each method has specific requirements. Select one of the following methods, and then complete the steps.
+
+* **Azure Active Directory authentication:** Obtain the following information:
+
+ * The **Application ID** of the Azure VPN Enterprise Application registered in your Azure AD tenant.
+ * The **Issuer**. Example: `https://sts.windows.net/your-Directory-ID`.
+ * The **Azure AD tenant**. Example: `https://login.microsoftonline.com/your-Directory-ID`.
+
+  For more information, see [Configure Azure AD authentication](virtual-wan-point-to-site-azure-ad.md) and [Prepare Azure AD tenant - OpenVPN](openvpn-azure-ad-tenant.md). A short sketch after this list shows how these values relate to your directory ID.
+
+* **RADIUS-based authentication:** Obtain the RADIUS server IP address, RADIUS server secret, and certificate information.
+
+* **Azure certificates:** For this configuration, certificates are required. You need to either generate or obtain certificates. A client certificate is required for each client. Additionally, the root certificate information (public key) needs to be uploaded. For more information about the required certificates, see [Generate and export certificates](certificates-point-to-site.md).
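As a small illustration (not part of the official steps), the following Python sketch shows how the Issuer and Azure AD tenant values mentioned above are formed from your directory (tenant) ID; the GUID and application ID shown are placeholders, and the real values come from your own tenant and the Azure VPN Enterprise Application registration.

```python
# Hypothetical placeholder values: substitute your own directory (tenant) ID and
# the Application ID of the Azure VPN Enterprise Application registered in your tenant.
directory_id = "00000000-0000-0000-0000-000000000000"   # placeholder GUID
application_id = "<application-id-from-your-tenant>"     # placeholder

issuer = f"https://sts.windows.net/{directory_id}"
azure_ad_tenant = f"https://login.microsoftonline.com/{directory_id}"

print(issuer)           # https://sts.windows.net/<your-Directory-ID>
print(azure_ad_tenant)  # https://login.microsoftonline.com/<your-Directory-ID>
```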
[!INCLUDE [Create P2S configuration](../../includes/virtual-wan-p2s-configuration-include.md)]
A point-to-site (P2S) configuration defines the parameters for connecting remote
[!INCLUDE [Create hub](../../includes/virtual-wan-p2s-hub-include.md)]
-## <a name="download"></a>Generate VPN client profile package
+## <a name="download"></a>Generate client configuration files
-Generate and download the VPN client profile package to configure your VPN clients.
+When you connect to a VNet using User VPN (P2S), you use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients.
[!INCLUDE [Download profile](../../includes/virtual-wan-p2s-download-profile-include.md)] ## <a name="configure-client"></a>Configure VPN clients Use the downloaded profile package to configure the remote access VPN clients. The procedure for each operating system is different. Follow the instructions that apply to your system.
-Once you have finished configuring your client, you can connect. The following instructions are for Windows VPN clients.
+Once you have finished configuring your client, you can connect.
[!INCLUDE [Configure clients](../../includes/virtual-wan-p2s-configure-clients-include.md)]
-## <a name="viewwan"></a>View your virtual WAN
+## <a name="connect-vnet"></a>Connect to VNet
+
+In this section, you create a connection between your virtual hub and your VNet. For this tutorial, you do not need to configure the routing settings.
+
+## <a name="viewwan"></a>View virtual WAN
+
+1. Navigate to your **virtual WAN**.
-1. Navigate to the virtual WAN.
1. On the **Overview** page, each point on the map represents a hub.
1. In the **Hubs and connections** section, you can view hub status, site, region, VPN connection status, and bytes in and out.
-## To modify settings
+## Modify settings
### <a name="address-pool"></a>Modify client address pool
Once you have finished configuring your client, you can connect. The following i
### <a name="dns"></a>Modify DNS servers
-1. Navigate to your **Virtual HUB -> User VPN (Point to site)**, then click **Configure**.
+1. Navigate to your **Virtual HUB -> User VPN (Point to site)**.
+
+1. Click the value next to **Custom DNS Servers** to open the **Edit User VPN gateway** page.
+
+1. On the **Edit User VPN gateway** page, edit the **Custom DNS Servers** field. Enter the DNS server IP addresses in the **Custom DNS Servers** text boxes. You can specify up to five DNS servers.
-1. On the **Edit User VPN gateway** page, edit the **Custom DNS Servers** field. Enter the DNS server IP address(es) in the **Custom DNS Servers** text box(es). You can specify up to five DNS Servers.
+1. Click **Edit** at the bottom of the page to validate your settings.
-1. Click **Edit** at the bottom of the page to validate your settings. Then, click to update this setting.
+1. Click **Confirm** to save your settings. Any changes on this page could take up to 30 minutes to complete.
## <a name="cleanup"></a>Clean up resources
When you no longer need the resources that you created, delete them. Some of the
## Next steps
-To connect a virtual network to a hub, see:
> [!div class="nextstepaction"]
-> * [Connect a VNet to a hub](howto-connect-vnet-hub.md)
+> * [Manage secure access to resources in spoke VNets](manage-secure-access-resources-spoke-p2s.md)